A new technical paper titled “Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory System” was published by researchers at Rensselaer Polytechnic Institute and IBM, via Semiconductor Engineering.

Abstract (excerpt): “Large Language Model (LLM) inference is increasingly constrained by memory bandwidth, with frequent access to the key-value (KV) cache dominating data movement. While attention sparsity reduces some...”
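The bandwidth pressure the abstract describes comes from the sheer size of the KV cache, which must be streamed on every decode step. A back-of-envelope sketch makes the scale concrete; the model shape used below (layer count, heads, head dimension) is an illustrative assumption, not taken from the paper:

```python
# Estimate KV cache footprint; the model shape below is an illustrative
# assumption, not drawn from the paper under discussion.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    # Factor of 2 covers keys AND values, stored per layer, per head, per token.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# Example: a 7B-class shape (32 layers, 32 KV heads, head_dim 128)
# at 4K context, batch 8, fp16 (2 bytes per element).
gb = kv_cache_bytes(32, 32, 128, 4096, 8) / 2**30
print(f"{gb:.1f} GiB")  # -> 16.0 GiB; decode touches much of this each step
```

At this scale the cache cannot always sit in the fastest memory tier, which is why dynamic placement across a heterogeneous memory system (e.g., HBM plus slower DRAM/CXL tiers) matters.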
A new technical paper titled “Power Stabilization for AI Training Datacenters” was published by researchers at Microsoft, OpenAI, and NVIDIA.

Abstract (excerpt): “Large Artificial Intelligence (AI) training workloads spanning several tens of thousands of GPUs present unique power management challenges. These arise due to the high variability in power consumption during training. Given the synchronous...”
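The variability the abstract points to stems from synchronous training: tens of thousands of GPUs surge during compute phases and idle together during communication phases, producing fleet-wide power swings. One generic mitigation is to limit how fast aggregate power may ramp (for instance via GPU power capping or injected filler work). The sketch below is a hypothetical slew-rate limiter, not the paper's method, and the power numbers are invented for illustration:

```python
# Hypothetical slew-rate limiter on a fleet power trace. This illustrates
# the general idea of ramp-rate limiting only; it is not the mechanism
# proposed in the paper, and all values are illustrative.
def limit_slew(power_trace, max_step):
    """Clamp per-interval power changes to +/- max_step units."""
    out = [power_trace[0]]
    for p in power_trace[1:]:
        delta = max(-max_step, min(max_step, p - out[-1]))
        out.append(out[-1] + delta)
    return out

# Alternating compute bursts (100 MW) and all-reduce lulls (40 MW):
raw = [100, 40, 100, 40, 100]
print(limit_slew(raw, 20))  # -> [100, 80, 100, 80, 100]
```

Ramp-rate limiting smooths the 60 MW swings to 20 MW per interval, at the cost of either throttling compute or burning filler power during lulls; that trade-off is exactly what makes datacenter-scale power stabilization a hard problem.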