The cuSPARSE API provides GPU-accelerated basic linear algebra subroutines for sparse matrix computations with unstructured sparsity. cuSPARSE is widely used by engineers and scientists working on applications in machine learning, AI, computational fluid dynamics, seismic exploration, and computational sciences.| NVIDIA Developer
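A CPU-side sketch of the kind of operation cuSPARSE accelerates, a sparse matrix-vector product over a CSR-format matrix. This is a minimal plain-Python illustration of the operation, not the cuSPARSE API; the function name and layout here are assumptions for illustration only.

```python
# Sparse matrix-vector multiply y = A @ x with A stored in CSR form.
# Plain-Python reference for the operation that cuSPARSE SpMV routines
# accelerate on the GPU; illustrative only, not the cuSPARSE API.

def csr_spmv(indptr, indices, data, x):
    """y[row] = sum of data[k] * x[indices[k]] over row's slice of k."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
indptr = [0, 2, 3]    # row r's nonzeros live in data[indptr[r]:indptr[r+1]]
indices = [0, 2, 1]   # column index of each stored nonzero
data = [1.0, 2.0, 3.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The CSR layout stores only the nonzeros plus row offsets, which is why it pays off when the matrix is mostly zeros.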
Helps drive silicon scaling, reduce costs, and accelerate technology advancements.| NVIDIA Developer
Benefit from automatic regular performance improvements and new GPU architectures.| NVIDIA Developer
Tackle demanding computational requirements of geometry-aware neural networks.| NVIDIA Developer
Solve linear systems with sparse matrices using GPU-accelerated Direct Sparse Solver.| NVIDIA Developer
Note: This is a legacy SDK. Developers may download and continue to use, but it is no longer supported.| NVIDIA Developer
This document describes the methods that can be used by applications to enable and expose an NVIDIA High Performance Graphics Processor for rendering in an Optimus configuration.| NVIDIA Developer
NVIDIA IndeX 3D Scientific Data Visualization| NVIDIA Developer
Discover the key features and benefits of NVIDIA Grace CPU, the first data center CPU developed by NVIDIA. It has been built from the ground up to create the world’s first superchips.| NVIDIA Technical Blog
Port HPC applications to NVIDIA Grace CPU-based platforms.| NVIDIA Developer
Get great performance, stability, and support for NVIDIA Grace.| NVIDIA Developer
The NVIDIA Grace CPU Superchip brings together two high-performance and power-efficient NVIDIA Grace CPUs with server-class LPDDR5X memory connected with NVIDIA NVLink-C2C.| NVIDIA Technical Blog
Schedules complex, multi-stage and multi-container robotics workloads.| NVIDIA Developer
First unveiled at NVIDIA GTC 2025, NVIDIA Cosmos Reason is an open and fully customizable reasoning vision language model (VLM) for physical AI and robotics. The VLM enables robots and vision AI…| NVIDIA Technical Blog
NVIDIA CUDA-Q (formerly NVIDIA CUDA Quantum) is an open-source programming model for building hybrid quantum-classical applications that take full advantage of CPU, GPU, and QPU compute abilities.| NVIDIA Technical Blog
The development of useful quantum computing is a massive global effort, spanning government, enterprise, and academia. The benefits of quantum computing could help solve some of the most challenging…| NVIDIA Technical Blog
Improving sources of sustainable energy is a worldwide problem with environmental and economic security implications. Ying-Yi Hong, distinguished professor of Power Systems and Energy at Chung Yuan…| NVIDIA Technical Blog
Quantum dynamics describes how complex quantum systems evolve in time and interact with their surroundings. Simulating quantum dynamics is extremely difficult yet critical for understanding and…| NVIDIA Technical Blog
The electrical grid is designed to support loads that are relatively steady, such as lighting, household appliances, and industrial machines that operate at constant power. But data centers today…| NVIDIA Technical Blog
Large language models (LLMs) offer incredible new capabilities, expanding the frontier of what is possible with AI. However, their large size and unique execution characteristics can make them…| NVIDIA Technical Blog
One of the most common tasks in CUDA programming is to parallelize a loop using a kernel. As an example, let’s use our old friend SAXPY. Here’s the basic sequential implementation, which uses a for…| NVIDIA Technical Blog
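The sequential SAXPY (single-precision `a*x` plus `y`) starting point that the post describes can be sketched as follows, here in Python for brevity; the blog's actual code is CUDA C, where this loop body becomes the kernel and each GPU thread handles one index instead of looping.

```python
# Sequential SAXPY: y[i] = a * x[i] + y[i] for every element.
# In CUDA C this for-loop becomes a kernel: each thread computes one i
# from blockIdx.x * blockDim.x + threadIdx.x rather than iterating.

def saxpy(a, x, y):
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]
    return y

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
```

Because every iteration is independent, the loop is trivially parallel, which is exactly what makes it a good first CUDA kernel.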
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library for accelerating deep learning primitives with state-of-the-art performance. cuDNN is integrated with popular deep…| NVIDIA Technical Blog
An end-to-end development environment setup solution for DRIVE, Jetson, and other SDKs.| NVIDIA Developer
NVIDIA is breaking new ground by integrating silicon photonics directly with its NVIDIA Quantum and NVIDIA Spectrum switch ICs. At GTC 2025, we announced the world’s most advanced Silicon Photonics…| NVIDIA Technical Blog
Build, debug, profile, and develop class-leading and cutting-edge software.| NVIDIA Developer
Profile systems, analyze performance, and optimize platforms.| NVIDIA Developer
Download NVIDIA Nsight Graphics| NVIDIA Developer
To get the most out of AI, optimizations are critical. When developers think about optimizing AI models for inference, model compression techniques—such as quantization, distillation…| NVIDIA Technical Blog
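As a small illustration of one compression technique named here, quantization, a symmetric int8 quantize/dequantize round trip. This is a generic sketch of the idea, not tied to any NVIDIA tool or API.

```python
# Symmetric int8 quantization: map floats in [-max|w|, +max|w|] onto
# integers in [-127, 127] via one scale factor, then dequantize to
# recover an approximation. Generic illustration, not an NVIDIA API.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.05, -1.27, 0.635]
q, scale = quantize_int8(w)
approx = dequantize(q, scale)
# approx is close to w; per-element error is bounded by scale / 2
```

Storing `q` as int8 instead of float32 cuts weight memory by 4x, at the cost of the small rounding error above.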
Optimize your use of NVIDIA RTX with these in-depth ray tracing tips.| NVIDIA Technical Blog
NVIDIA announced the release of NVIDIA Dynamo today at GTC 2025. NVIDIA Dynamo is a high-throughput, low-latency open-source inference serving framework for deploying generative AI and reasoning…| NVIDIA Technical Blog
Deep Neural Networks (DNNs) have led to breakthroughs in a number of areas, including image processing and understanding, language modeling, language translation, speech processing, game playing…| NVIDIA Technical Blog
Mesh shaders are designed to overcome the bottlenecks of the fixed layout used by the classical geometry pipeline.| NVIDIA Technical Blog
Reservoir simulation helps reservoir engineers optimize their resource exploration approach by simulating complex scenarios and comparing with real-world field data. This extends to simulation of…| NVIDIA Technical Blog
The NVIDIA Grace CPU is transforming data center design by offering a new level of power-efficient performance. Built specifically for data center scale, the Grace CPU is designed to handle demanding…| NVIDIA Technical Blog
NVIDIA designed the NVIDIA Grace CPU to be a new kind of high-performance, data center CPU—one built to deliver breakthrough energy efficiency and optimized for performance at data center scale.| NVIDIA Technical Blog
Large language models (LLMs) have enabled AI tools that help you write more code faster, but as we ask these tools to take on more and more complex tasks, there are limitations that become apparent.| NVIDIA Technical Blog
Run AI models on NVIDIA GPUs in the cloud, data center, workstations, and PCs.| NVIDIA Developer
Render game assets with AI, create game characters with photo-realistic visuals, and more.| NVIDIA Developer
In the previous three posts of this CUDA C & C++ series we laid the groundwork for the major thrust of the series: how to optimize CUDA C/C++ code. In this and the following post we begin our…| NVIDIA Technical Blog
Design, simulate, test, and train AI-based robots in a physically-based virtual environment.| NVIDIA Developer
Streamline and expedite development of advanced AI robotics applications.| NVIDIA Developer
Accelerate solutions for dynamic challenges such as assembly tasks and more.| NVIDIA Developer
A Breakthrough in Data Center Performance and Efficiency.| NVIDIA Developer
Everything you want to know about the new H100 GPU.| NVIDIA Technical Blog
What is the interest in trillion-parameter models? We know many of the use cases today, and interest is growing due to the promise of an increased capacity for: The benefits are great…| NVIDIA Technical Blog
The NVIDIA A100, V100 and T4 GPUs fundamentally change the economics of the data center, delivering breakthrough performance with dramatically fewer servers, less power consumption, and reduced networking overhead, resulting in total cost savings of 5X-10X.| NVIDIA Developer
Builds end-to-end accelerated AI applications and supports edge AI development.| NVIDIA Developer
Get high performance and accuracy for industrial simulation use cases across different devices.| NVIDIA Developer
With the R515 driver, NVIDIA released a set of Linux GPU kernel modules in May 2022 as open source with dual GPL and MIT licensing. The initial release targeted datacenter compute GPUs…| NVIDIA Technical Blog
GPU-driven rendering has long been a major goal for many game applications. It enables better scalability for handling large virtual scenes and reduces cases where the CPU could bottleneck a game’s…| NVIDIA Technical Blog
Get higher performance with a set of GPU-accelerated libraries, tools, and technologies.| NVIDIA Developer
Stacking transformer layers to create large models results in better accuracies, few-shot learning capabilities, and even near-human emergent abilities on a wide range of language tasks.| NVIDIA Technical Blog
An open-source platform for integrating and programming QPUs, GPUs, and CPUs in one system.| NVIDIA Developer
Profile, optimize, and debug graphics applications such as Direct3D, Vulkan, and more.| NVIDIA Developer
The rise in generative AI adoption has been remarkable. Catalyzed by the launch of OpenAI’s ChatGPT in 2022, the new technology amassed over 100M users within months and drove a surge of development…| NVIDIA Technical Blog
At AWS re:Invent 2023, AWS and NVIDIA announced that AWS will be the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips interconnected with NVIDIA NVLink technology through NVIDIA DGX…| NVIDIA Technical Blog
An interactive profiler for CUDA and NVIDIA OptiX.| NVIDIA Developer
Get the latest feature updates to NVIDIA's proprietary compute stack.| NVIDIA Developer
Build and deploy game characters and interactive avatars at scale.| NVIDIA Developer
Virtual reality displays continue to evolve and now include advanced configurations such as canted HMDs with non-coplanar displays. Other headsets offer ultra-wide fields of view as well as other…| NVIDIA Technical Blog
Build and operate real time metaverse tools and apps.| NVIDIA Developer
Explore your GPU compute capability and CUDA-enabled products.| NVIDIA Developer
Get the latest Vulkan 1.3 general release drivers and developer beta drivers.| NVIDIA Developer
The board support package for the Jetson platform.| NVIDIA Developer
Find the related video encoding and decoding support for all NVIDIA GPU products.| NVIDIA Developer
Find resources to build and deploy real-time AI pipelines anywhere.| NVIDIA Developer
Leverage the power of transfer learning to fine-tune NVIDIA pretrained models.| NVIDIA Developer
Provide end-to-end acceleration for AI applications and accelerate your time to market.| NVIDIA Developer
Deep Learning Accelerator (DLA)| NVIDIA Developer
Get access to SDKs, trainings, and connect with developers.| NVIDIA Developer
Encode and decode hardware-accelerated videos on Windows and Linux.| NVIDIA Developer
News and tutorials for developers, scientists, and IT admins| NVIDIA Technical Blog
Accelerate Quantum Circuit Simulation Frameworks.| NVIDIA Developer
Automatic Mixed Precision for Deep Learning| NVIDIA Developer
Accelerate application development for the NVIDIA BlueField DPU.| NVIDIA Developer
Access free tools, extensive learning opportunities, and expert help.| NVIDIA Developer
Simplifies the development process and accelerates data science.| NVIDIA Developer
Monitor real-time data and detect threats instantly with AI.| NVIDIA Developer