Finally, an update to the AI workbench - and one hell of an update it is! :)
The home server farm has now been blessed with dedicated silicon, and it is an absolute beast: the NVIDIA RTX 5060 Ti 16GB. The old setup—which was painfully grinding through CPU inference at a mere 1 token/sec on 46GB of RAM—has officially been retired. After months of relying on remote APIs, lightning-fast local agentic workflows are finally a reality, all without seriously overworking my legacy workstation (or waiting eons for responses).
To clarify, my paid subscriptions to cloud LLMs will continue. However, with this capable local setup handling the bulk of my daily tasks, I expect to hit the dreaded "credits have expired" message much later in the month—if ever. At the very least, having this local horsepower certainly pushes off the need to upgrade to a pricier "Ultra" plan just to get more API tokens.
The ultimate objective here is to deploy an agentic setup for the home and family. I initially investigated OpenClaw for the orchestration layer, but its security model was simply too porous for a paranoid systems engineer. IronClaw, on the other hand, looks like a solid, secure candidate to serve as the local agent nexus. Before any of that software could run, I needed an inference engine that did not crawl.
The Hardware: The ThinkStation S30 Meets Blackwell
The host machine is my trusty Lenovo ThinkStation S30 (Machine Type 4351). It is a Sandy/Ivy Bridge Xeon platform, firmly anchored in the PCIe 3.0 era. Dropping a brand-new Blackwell-architecture GPU into a system this old is technically a severe mismatch, but the RTX 5060 Ti 16GB is the perfect fit for this specific niche.
Instead of chasing older, power-hungry professional cards like the RTX A4000, or settling for consumer Turing cards with split memory bottlenecks, the 5060 Ti offered exactly what the S30 needed:
- 16GB VRAM: The absolute minimum needed to comfortably fit modern 7B-9B parameter models.
- GDDR7 Bandwidth: Hitting 448 GB/s, drastically improving throughput over older 60-series cards.
- Native FP8/FP4 Support: Crucial for running highly quantized models efficiently.
Overcoming Legacy Architecture Limits
Getting a 2026 GPU to speak with a 2013 motherboard required some immediate troubleshooting. Initial boots into Linux Mint resulted in a wall of kernel panics and DMAR (DMA Remapping) faults. The modern GPU's memory management completely clashed with the S30's legacy Intel VT-d (IOMMU) implementation.
I resolved this by disabling Intel VT-d in the BIOS and appending intel_iommu=off to the GRUB bootloader parameters. This bypassed the broken firmware tables and allowed the system to boot stably with the proprietary NVIDIA 580.126.09 drivers.
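For anyone chasing the same DMAR faults, the fix boils down to one BIOS toggle plus one kernel parameter. A sketch for Mint/Ubuntu-style systems ("quiet splash" is the stock default; your existing flags may differ):

```shell
# Step 1: in BIOS setup, disable Intel VT-d (typically under the
# chipset or virtualization settings).

# Step 2: in /etc/default/grub, append intel_iommu=off to the
# existing kernel command line:
#
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=off"
#
# Then regenerate the GRUB config and reboot:
sudo update-grub
sudo reboot
```

Note that this trades away IOMMU isolation entirely, which is an acceptable compromise on a single-user box whose firmware tables are broken anyway.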
Another significant bottleneck was the PCIe 3.0 bus itself. When running vLLM, the default CUDA Graph capture and torch.compile phases initially took a grueling 16 minutes to complete due to the slow bus speed. While everything worked beautifully after that initial warmup, the long delay posed a problem: because the inference engine runs as an auto-start systemd service, systemd assumed the process had hung and killed it before compilation could finish. To resolve this, I bypassed the compilation overhead entirely by starting vLLM with the --enforce-eager flag, so the service now comes up reliably without being restarted by the OS.
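For reference, here is a minimal sketch of what such a unit can look like; the install path, user, and model identifier are placeholders, not a copy of my actual unit file:

```ini
# /etc/systemd/system/vllm.service -- minimal sketch, adjust to taste
[Unit]
Description=vLLM inference server
After=network-online.target

[Service]
User=llm
# --enforce-eager skips CUDA graph capture and torch.compile, so
# startup stays well under systemd's default timeout even on a
# PCIe 3.0 host.
ExecStart=/opt/vllm/bin/vllm serve \
    Qwen2.5-Coder-7B-Instruct-FP8-Dynamic \
    --max-model-len 16384 \
    --enforce-eager
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The alternative would be raising TimeoutStartSec= in the unit to outlast the 16-minute compile, keeping the captured CUDA graphs for slightly faster steady-state inference at the cost of a long warmup on every reboot.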
Performance: 40-70 Tokens per Second
Despite the older host system, the 5060 Ti excels at small, localized tasks.
I settled on the Qwen2.5-Coder-7B-Instruct-FP8-Dynamic model. Because it leverages FP8 precision, the model weights consume roughly 8.5GB of the available 15.48 GiB VRAM. This leaves plenty of overhead for the KV cache and a 16k context window without spilling over into system RAM (which, across a PCIe 3.0 bus, would throttle performance down to single-digit tokens per second).
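As a sanity check on those numbers, a back-of-envelope calculation lines up with what vLLM reports. The architecture figures below (about 7.6B parameters, 28 layers, grouped-query attention with 4 KV heads of dimension 128) come from the published Qwen2.5-7B config; the FP16 KV cache is an assumption, so treat this as an estimate rather than an exact accounting:

```python
# Back-of-envelope VRAM estimate for Qwen2.5-Coder-7B in FP8.

PARAMS = 7.6e9          # ~7.6B parameters
FP8_BYTES = 1           # one byte per FP8 weight

weights_gb = PARAMS * FP8_BYTES / 1e9  # ~7.6 GB of raw weights

# KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * bytes
LAYERS, KV_HEADS, HEAD_DIM, FP16_BYTES = 28, 4, 128, 2
kv_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * FP16_BYTES  # bytes

CONTEXT = 16_384        # the 16k context window
kv_cache_gb = kv_per_token * CONTEXT / 1e9

print(f"weights : ~{weights_gb:.1f} GB")
print(f"kv cache: ~{kv_cache_gb:.2f} GB for a full {CONTEXT}-token context")
```

Raw weights plus the CUDA context, activations, and allocator overhead land in the ~8.5GB ballpark, and even a completely full 16k KV cache adds under 1GB, which is why everything fits comfortably on the 16GB card.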
With vLLM 0.19.1 managing the inference (I initially started with Ollama but switched to vLLM—I don't have comparison numbers yet, but that is a topic for a future post), the S30 consistently pushes 40 to 70 tokens/sec for generation, and handles prompt processing at over 700 tokens/sec.
Here is a quick snapshot from the vLLM logs confirming these real-world speeds during a typical code completion task:
```text
(APIServer pid=967222) INFO 04-22 19:09:50 [loggers.py:259] Engine 000:
Avg prompt throughput: 709.3 tokens/s, Avg generation throughput: 4.3 tokens/s,
Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 9.9%,
Prefix cache hit rate: 0.4%
(APIServer pid=967222) INFO: 127.0.0.1:60326 - "POST /v1/chat/completions
HTTP/1.1" 200 OK
(APIServer pid=967222) INFO 04-22 19:10:00 [loggers.py:259] Engine 000:
Avg prompt throughput: 52.4 tokens/s, Avg generation throughput: 41.0 tokens/s,
Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%,
Prefix cache hit rate: 0.6%
```
The inference service now runs automatically via systemd. To access the engine securely from my laptop at home, I rely entirely on an SSH port forward: no firewall ports to open, no network-level security to configure, just SSH handling the encrypted tunnel. On the client side, this local endpoint works beautifully with VS Code and the Continue extension, providing a seamless and entirely private AI coding experience.
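The tunnel itself is a one-liner, assuming vLLM listens on its default port 8000 (username and hostname here are placeholders):

```shell
# Forward local port 8000 to port 8000 on the home server.
# -N: no remote shell, just the tunnel.
ssh -N -L 8000:localhost:8000 me@homeserver

# The engine then looks like a local OpenAI-compatible endpoint:
curl http://localhost:8000/v1/models
```

Continue only needs http://localhost:8000/v1 configured as the base URL of an OpenAI-compatible provider, and everything else works as if the model were running on the laptop.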
Power, Thermals, and Footprint
One of the best surprises of this Blackwell upgrade is how remarkably quiet and power-efficient the card is. It idles gracefully at 10W and rarely pushes past 20W during my typical small localized tasks. Because it runs so cool (sitting around 38°C with the fans completely off at 0%), stuffing the entire server rig into a cupboard does not trigger any thermal anxiety whatsoever.
With the hardware foundation finally stabilized, incredibly power-efficient, and pushing excellent tokens per second, the runway is completely clear to deploy IronClaw and build out the actual home agent capabilities.
Looking Ahead
I am absolutely thrilled to have reached this stage! Pressing my trusty old ThinkStation into service was a gamble that paid off beautifully, saving a cool $2,000 to $3,000 that would otherwise have gone into an entirely new workstation on top of the GPU cost. Better yet, this setup gives me real flexibility to stagger future upgrades. If I ever need more VRAM, I can simply drop in a second Blackwell card and run the pair in tandem, provided I eventually upgrade the host machine to a PCIe 5.0 platform so the interconnect isn't choked by legacy bus speeds.
For now, I am eagerly looking forward to what's next. Having a secure, lightning-fast, and entirely local AI bedrock opens up incredible possibilities for the home network. I'll be diving deep into the IronClaw orchestration very soon, and you can expect more blog posts detailing that agentic journey in the coming weeks!
