Categories
Misc

Just Released: cuDSS 0.3.0

cuDSS (Preview) is an accelerated direct sparse solver. It now supports multi-GPU, multi-node platforms and introduces a hybrid memory mode.

Source

Categories
Misc

Decoding How the Generative AI Revolution BeGAN

Generative models have completely transformed the AI landscape — headlined by popular apps such as ChatGPT and Stable Diffusion.

Categories
Misc

Power Advanced Coding Capabilities with DeepSeek Coder LLM

DeepSeek Coder V2, available as an NVIDIA NIM microservice, enhances project-level coding and infilling tasks.
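
Since the model ships as a NIM microservice, it can be reached through the NVIDIA API catalog's OpenAI-compatible endpoint. Below is a minimal sketch of such a call with the openai Python client; the model ID, prompt, and sampling settings are illustrative assumptions rather than values from the post.

    # Minimal sketch of querying a NIM microservice via the NVIDIA API catalog's
    # OpenAI-compatible endpoint. The model ID is a placeholder assumption;
    # check the catalog entry for the exact DeepSeek Coder V2 identifier.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key=os.environ["NVIDIA_API_KEY"],  # key generated in the API catalog
    )

    completion = client.chat.completions.create(
        model="deepseek-ai/deepseek-coder-v2",  # placeholder model ID
        messages=[{
            "role": "user",
            "content": "Fill in the body:\ndef binary_search(arr, target):",
        }],
        temperature=0.2,
        max_tokens=512,
    )
    print(completion.choices[0].message.content)

The same pattern applies to the other catalog-hosted models in this digest, such as StarCoder2-15B and Phi-3-Medium; typically only the model ID changes.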

Source

Categories
Misc

Addressing Hallucinations in Speech Synthesis LLMs with the NVIDIA NeMo T5-TTS Model

NVIDIA NeMo has released the T5-TTS model, a significant advancement in text-to-speech (TTS) technology. Based on large language models (LLMs), T5-TTS produces more accurate and natural-sounding speech. By improving alignment between text and audio, T5-TTS eliminates hallucinations such as repeated spoken words and skipped text. Additionally, T5-TTS makes up to 2x fewer word pronunciation errors…

Source

Categories
Misc

Achieving High Mixtral 8x7B Performance with NVIDIA H100 Tensor Core GPUs and TensorRT-LLM

As large language models (LLMs) continue to grow in size and complexity, the performance requirements for serving them quickly and cost-effectively continue to grow. To deliver high LLM inference performance, an efficient parallel computing architecture and a flexible, highly optimized software stack are required. Recently, NVIDIA Hopper GPUs running NVIDIA TensorRT-LLM inference software set…
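
For a concrete sense of what serving Mixtral 8x7B with TensorRT-LLM can look like, here is a rough sketch using the high-level Python LLM API, assuming a recent TensorRT-LLM release; the checkpoint name, parallelism degree, and sampling settings are illustrative assumptions.

    # Rough sketch, assuming a recent TensorRT-LLM release that includes the
    # high-level Python LLM API; argument names and values are illustrative.
    from tensorrt_llm import LLM, SamplingParams

    # Mixtral 8x7B is a mixture-of-experts model; tensor parallelism across
    # several H100 GPUs is one common way to fit and serve it efficiently.
    llm = LLM(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # Hugging Face checkpoint
        tensor_parallel_size=2,                        # split weights across 2 GPUs
    )

    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(["Summarize what FP8 quantization does."], params)
    print(outputs[0].outputs[0].text)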

Source

Categories
Misc

Checkpointing CUDA Applications with CRIU

Checkpoint and restore functionality for CUDA is exposed through a command-line utility called cuda-checkpoint. This utility can be used to transparently…
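
The flow pairs cuda-checkpoint with CRIU: GPU state is first moved into host memory so the process looks like an ordinary CPU process, which CRIU can then dump and restore. Below is a minimal sketch of that sequence driven from Python; the PID, image directory, and exact CRIU options are placeholders, and cuda-checkpoint's --toggle/--pid flags are assumed from its command-line help.

    # Minimal sketch of checkpointing and restoring a CUDA process.
    import subprocess

    pid = "12345"          # PID of the running CUDA process (placeholder)
    images = "/tmp/ckpt"   # directory where CRIU stores checkpoint images

    # 1. Suspend CUDA work and copy device state into host memory,
    #    leaving a plain CPU process that CRIU can handle.
    subprocess.run(["cuda-checkpoint", "--toggle", "--pid", pid], check=True)

    # 2. Checkpoint the process tree with CRIU.
    subprocess.run(["criu", "dump", "--shell-job", "--images-dir", images,
                    "--tree", pid], check=True)

    # ... later: restore the process, then move CUDA state back onto the GPU.
    # CRIU restores the original PID by default, so the same value is reused.
    subprocess.run(["criu", "restore", "--shell-job", "--images-dir", images,
                    "--restore-detached"], check=True)
    subprocess.run(["cuda-checkpoint", "--toggle", "--pid", pid], check=True)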

Source

Categories
Misc

Phi-3-Medium: Now Available on the NVIDIA API Catalog

Phi-3-Medium accelerates research with logic-rich features at both short (4K) and long (128K) context lengths.

Source

Categories
Misc

Advancing Security for Large Language Models with NVIDIA GPUs and Edgeless Systems

Edgeless Systems introduced Continuum AI, the first generative AI (GenAI) framework that keeps prompts encrypted at all times with confidential computing by combining confidential VMs with NVIDIA H100 GPUs and secure sandboxing. The launch of this platform underscores a new era in AI deployment, where the benefits of powerful LLMs can be realized without compromising data privacy and…

Source

Categories
Misc

How an NVIDIA Engineer Unplugs to Recharge During Free Days

On a weekday afternoon, Ashwini Ashtankar sat on the bank of the Doodhpathri River, in a valley nestled in the Himalayas. Taking a deep breath, she noticed that there was no city noise, no pollution — and no work emails. Ashtankar, a senior tools development engineer in NVIDIA’s Pune, India, office, took advantage of the…
Read Article

Categories
Misc

StarCoder2-15B: A Powerful LLM for Code Generation, Summarization, and Documentation

Trained on 600+ programming languages, StarCoder2-15B is now packaged as a NIM inference microservice available for free from the NVIDIA API catalog.

Source