Modern AI applications increasingly rely on models that combine huge parameter counts with multi-million-token context windows. Whether it is AI agents following months of conversation, legal assistants reasoning through gigabytes of case law, or coding copilots navigating sprawling repositories, preserving long-range context is essential for relevance and…
NVIDIA cuQuantum is an SDK of optimized libraries and tools that accelerate quantum computing emulations at both the circuit and device level. With NVIDIA Tensor Core GPUs, developers can speed up quantum computer simulations based on quantum dynamics, state vectors, and tensor network methods by orders of magnitude. In many cases, this provides researchers with simulations…
As AI evolves toward planning, research, and reasoning with agentic AI, workflows are becoming increasingly complex. To deploy agentic AI applications efficiently, AI clouds need a software-defined, hardware-accelerated application delivery controller (ADC) that enables dynamic load balancing, robust security, cloud-native multi-tenancy, and rich observability. F5 BIG-IP ADC for Kubernetes…
This is the third post in the large language model latency-throughput benchmarking series, which shows developers how to benchmark LLM inference with TensorRT-LLM. See LLM Inference Benchmarking: Fundamental Concepts for background on common benchmarking metrics and parameters, and read LLM Inference Benchmarking Guide: NVIDIA GenAI-Perf and NIM for tips on using GenAI…
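The metrics that series covers can be illustrated with a small, self-contained calculation. This is an illustrative sketch only, not TensorRT-LLM or GenAI-Perf code, and the `RequestTrace` structure and timestamp values are hypothetical: given per-request timestamps, it computes time to first token (TTFT), average inter-token latency, and aggregate output throughput.

```python
from dataclasses import dataclass

@dataclass
class RequestTrace:
    """Timestamps (seconds) recorded for one LLM request (hypothetical data)."""
    sent: float          # when the request was issued
    first_token: float   # when the first output token arrived
    finished: float      # when the last output token arrived
    output_tokens: int   # number of tokens generated

def ttft(r: RequestTrace) -> float:
    """Time to first token: latency before streaming begins."""
    return r.first_token - r.sent

def inter_token_latency(r: RequestTrace) -> float:
    """Average gap between successive output tokens after the first."""
    if r.output_tokens <= 1:
        return 0.0
    return (r.finished - r.first_token) / (r.output_tokens - 1)

def throughput_tok_per_s(traces: list[RequestTrace]) -> float:
    """Aggregate output tokens per second over the whole benchmark window."""
    start = min(r.sent for r in traces)
    end = max(r.finished for r in traces)
    total = sum(r.output_tokens for r in traces)
    return total / (end - start)

traces = [
    RequestTrace(sent=0.0, first_token=0.25, finished=2.25, output_tokens=101),
    RequestTrace(sent=0.1, first_token=0.40, finished=2.10, output_tokens=86),
]
print(f"TTFT:       {ttft(traces[0]):.2f} s")              # 0.25 s
print(f"ITL:        {inter_token_latency(traces[0]):.3f} s")  # 0.020 s
print(f"Throughput: {throughput_tok_per_s(traces):.1f} tok/s")
```

Real tools measure the same quantities from streamed responses; the point is that latency metrics (TTFT, inter-token latency) are per-request, while throughput is computed over the full benchmark window across concurrent requests.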
RAPIDS, a suite of NVIDIA CUDA-X libraries for Python data science, released version 25.06, introducing exciting new features. These include a Polars GPU streaming engine, a unified API for graph neural networks (GNNs), and acceleration for support vector machines with zero code changes required. In this blog post, we’ll explore a few of these updates. In September 2024…
AI agents powered by large language models are transforming enterprise workflows, but high inference costs and latency can limit their scalability and user experience. To address this, NVIDIA recently announced the NVIDIA AI Blueprint for Building Data Flywheels, an enterprise-ready workflow that optimizes AI agents through automated experimentation to find efficient models that reduce…
The forecast this month is showing a 100% chance of epic gaming. Catch the scorching lineup of 20 titles coming to the cloud, which gamers can play whether indoors or on the go. Six new games are landing on GeForce NOW this week, including launch day titles Figment and Little Nightmares II. And to make…
As accelerated computing continues to drive application performance in all areas of AI and scientific computing, there’s a renewed interest in GPU optimization techniques to ensure applications obtain the best possible performance. As an application developer, you have many ways to program GPUs, up and down the software stack. In this post, we introduce some of the different levels of the stack…
As part of continued efforts to ensure NVIDIA Omniverse is a developer-first platform, NVIDIA will be deprecating the Omniverse Launcher on Oct. 1. Doing so will enable a more open, integrated, and efficient development experience. Removing the Launcher will streamline how developers access essential tools and resources on the platforms they already use and trust.