For local LLM enthusiasts, VRAM has always been the main constraint when choosing hardware. Now, a new option is becoming more accessible at a price point that’s hard to ignore. The Huawei Atlas 300I Duo, an AI inference card from China, is showing up on platforms like Alibaba for under $1500, offering an impressive 96 […]| Hardware Corner
The latest rumors around AMD’s upcoming RDNA5 flagship, codenamed AT0, suggest a 512-bit memory bus paired with GDDR7. For anyone running large quantized LLMs locally, this is the part of the leak worth paying attention to – not the shader counts or gaming benchmarks. If the leak is accurate, bandwidth and VRAM capacity could finally […]
NVIDIA’s Jet-Nemotron claims a 45x VRAM reduction for local LLMs. Here’s what that really means for speed, context length, and consumer GPUs.
Moore’s Law Is Dead has leaked new details on AMD’s upcoming Medusa Halo APU, the direct successor to Strix Halo. For enthusiasts focused on running large language models locally, this is an important development, as Medusa Halo addresses the biggest bottleneck of its predecessor: memory bandwidth. From Strix Halo to Medusa Halo Strix Halo (Ryzen […]
We recently discussed the upcoming single-GPU Intel Arc Pro B60 with 24GB of VRAM and its potential to shake up the local LLM hardware market. Now, its bigger brother is set to arrive. Reports indicate the MaxSun Arc Pro B60 Dual, featuring two GPUs on a single board for a total of 48GB of VRAM, […]
New pricing information for NVIDIA’s upcoming RTX 50 SUPER series has surfaced, suggesting a significant shift in the value proposition for local large language model (LLM) enthusiasts. According to the leak, the new SUPER cards will launch at the same Manufacturer’s Suggested Retail Price (MSRP) as their non-SUPER predecessors. For anyone building a system for […]
The stream of mini-PCs built around AMD’s Ryzen AI 300 “Strix Halo” platform continues, this time with a new model named the X+ RIVAL. While the market is quickly becoming crowded with similar…
In a significant development for the AI community, the Qwen team has announced the release of its most powerful open agentic code model to date, the Qwen3-Coder-480B-A35B-Instruct.
The SIXUNITED STHT1 Mini-ITX motherboard brings AMD’s Strix Halo APU and 128GB of LPDDR5X memory to DIY LLM builders.
The landscape for high-density, on-premise AI hardware is rapidly evolving, driven almost single-handedly by the arrival of AMD’s Ryzen AI 300 “Strix Halo” series. For the enthusiast dedicated to…
The arrival of AMD’s Ryzen AI MAX+ 395 “Strix Halo” APU has generated considerable interest among local LLM enthusiasts, promising a potent combination of CPU and integrated graphics performance with…
Zotac unveils plans for the Magnus EA mini-PC with AMD Strix Halo APU, aiming to bring powerful local LLM inference to compact, GPU-free systems.
Beelink has unveiled the GTR9 Pro AI Mini, a compact LLM-ready PC powered by the Ryzen AI MAX+ 395 APU with up to 128GB RAM and 110GB usable VRAM—designed for local LLM inference in a small form factor.
Chinese manufacturer FAVM has announced FX-EX9, a compact 2-liter Mini-PC powered by AMD’s Ryzen AI MAX+ 395 “Strix Halo” processor, potentially offering new options for enthusiasts running quantized…
GMKtec has officially priced its EVO-X2 SFF/Mini-PC at ~$2,000, positioning it as a potential option for AI enthusiasts looking to run large language models (LLMs) at home.
While NVIDIA’s newly announced RTX Pro 6000 offers a straightforward 96GB VRAM solution, a new wave of modified RTX 4090 cards from China – offering 48GB per card – has emerged as a potential alternative.