Hardware Picks

Curated GPU, mini-PC, and storage recommendations for local AI — tested, researched, and honestly reviewed.

Affiliate links use the selfhostailab-20 Amazon tag (with equivalent Newegg links), at no extra cost to you.

Browse by category below — or jump straight to GPUs, Mini-PCs, or Storage.


Best Storage for Local AI (2025)

Model files are large: Llama 3 70B at Q4 is ~40 GB, and a full ComfyUI setup with several models can hit 200 GB+ (quick size math below). Fast NVMe makes model loading near-instant. Here’s what to buy. Best NVMe: the Samsung 990 Pro 2TB is consistently the fastest consumer NVMe, with sequential reads up to 7,450 MB/s, so models load in seconds. The 2 TB variant is the sweet spot for a primary AI drive. Samsung 990 Pro 2TB · NVMe · PCIe 4.0 …

Read more →
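That ~40 GB figure is just parameter count times bits per weight. A minimal sketch of the math, using approximate effective bits-per-weight for common llama.cpp quant formats; the exact GGUF sizes vary slightly by model, so treat these as estimates:

```python
# Back-of-envelope model file sizes: parameters x effective bits per weight.
# Bits-per-weight values are approximations for common llama.cpp quant
# formats (assumption), ignoring tokenizer/metadata overhead.

QUANT_BPW = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q4_K_M": 4.85,
    "Q4_0": 4.55,
}

def model_file_gb(params_billion: float, quant: str) -> float:
    """Rough on-disk size in GB."""
    return params_billion * QUANT_BPW[quant] / 8

for name, params in [("Llama 3 8B", 8), ("Llama 3 70B", 70)]:
    for quant in ("Q4_K_M", "Q8_0", "F16"):
        print(f"{name} @ {quant}: ~{model_file_gb(params, quant):.0f} GB")
```

Multiply across a shortlist of models plus a few image checkpoints and the 200 GB+ figure above arrives quickly.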

Best Mini-PCs for Local AI (2025)

Not everyone needs a gaming tower. Mini-PCs are ideal for always-on, low-power local AI inference, especially when paired with fast RAM and an external GPU via Thunderbolt/OCuLink. Best Overall: the Minisforum EliteMini UM790 Pro pairs an AMD Ryzen 9 7940HS with Radeon 780M integrated graphics (up to 16 GB of shared VRAM carved out of 64 GB RAM). It runs 7B and 13B models at a solid clip (rough fit check below) and stays near-silent under load, with an OCuLink port for eGPU expansion.

Read more →
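Whether a given model fits in that 16 GB shared-VRAM carve-out is simple arithmetic. A rough fit check, assuming Q4-class weights (~4.85 bits/weight) and a flat 2 GB allowance for KV cache and runtime buffers; both numbers are assumptions, and real usage grows with context length:

```python
# Rough fit check against the UM790 Pro's shared-VRAM budget.
# Assumes Q4-class weights and a flat 2 GB KV-cache/runtime allowance
# (both assumptions; actual usage scales with context length).

SHARED_VRAM_GB = 16.0  # max UMA carve-out mentioned above

def fits_in_shared_vram(params_billion: float, kv_overhead_gb: float = 2.0) -> bool:
    weights_gb = params_billion * 4.85 / 8  # Q4-class bits per weight
    return weights_gb + kv_overhead_gb <= SHARED_VRAM_GB

for b in (7, 13, 34):
    verdict = "fits" if fits_in_shared_vram(b) else "needs eGPU or CPU offload"
    print(f"{b}B @ Q4: {verdict}")
```

Which is exactly why the OCuLink port matters: 34B-class models are where an eGPU starts earning its keep.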

Best GPUs for Local AI (2025)

VRAM is king for local AI: the more you have, the larger the model you can load without quantization loss. Here’s what’s worth buying right now. Budget Pick: the NVIDIA RTX 4060 Ti (16 GB) delivers 16 GB of VRAM at a sub-$500 price point. It runs 7B models at full FP16 precision; 13B fits at Q8 with modest context (VRAM math below), and 34B at Q4 with some layers offloaded to system RAM. Great for getting started without breaking the bank. NVIDIA GeForce RTX 4060 Ti 16GB · 16 GB GDDR6 · 165W TDP · PCIe 4.0 …

Read more →
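Why the modest-context caveat on 13B at Q8? VRAM has to hold the weights plus the KV cache, and the cache grows linearly with context length. A sketch using Llama-2-13B-like architecture numbers (40 layers, 40 KV heads, head dim 128, FP16 cache) as stated assumptions:

```python
# Weights + KV cache vs. a 16 GB card. Architecture numbers are
# illustrative (Llama-2-13B-like; an assumption -- check your model card).

VRAM_GB = 16.0
weights_q8_gb = 13 * 8.5 / 8  # 13B at Q8_0: ~13.8 GB

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int, ctx: int,
                bytes_per_elem: int = 2) -> float:
    # One K and one V tensor per layer, each ctx * n_kv_heads * head_dim.
    return 2 * n_layers * ctx * n_kv_heads * head_dim * bytes_per_elem / 1e9

for ctx in (2048, 4096):
    total = weights_q8_gb + kv_cache_gb(40, 40, 128, ctx)
    verdict = "fits" if total <= VRAM_GB else "over budget"
    print(f"13B Q8 @ {ctx} ctx: ~{total:.1f} GB ({verdict} on 16 GB)")
```

Quantizing the KV cache or stepping down to Q6/Q5 weights buys back that headroom at longer contexts.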