Following an announcement by Japan’s Ministry of Economy, Trade and Industry, NVIDIA will play a central role in developing the nation’s generative AI infrastructure as Japan seeks to capitalize on the technology’s economic potential and further develop its workforce. NVIDIA is collaborating with key digital infrastructure providers, including GMO Internet Group, Highreso, KDDI Corporation, RUTILEA, …
At the recent World Governments Summit in Dubai, NVIDIA CEO Jensen Huang emphasized the importance of sovereign AI, which refers to a nation’s capability to develop and deploy AI technologies. Nations have started building regional large language models (LLMs) that codify their culture, history, and intelligence and serve their citizens with the benefits of generative AI.
Neural machine translation (NMT) is the task of automatically translating a sequence of words from one language to another. In recent years, attention-based transformer models have had a profound impact on complex language modeling tasks, which predict the next token in a sequence, and NMT is a typical example. There are plenty of open-source NMT models…
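As a quick point of reference, here is a minimal sketch of transformer-based NMT using the open-source Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-zh Marian checkpoint. This is a generic illustration of the idea, not the NVIDIA NeMo workflow the posts referenced here walk through.

```python
# Generic illustration of transformer-based NMT (English -> Chinese)
# using the Hugging Face transformers library. The model checkpoint
# shown is a publicly available example, not the one used in the posts.
from transformers import pipeline

# Load a pretrained English -> Chinese translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

# Translate a sample sentence; the pipeline returns a list of dicts
# with a "translation_text" field.
result = translator("Neural machine translation maps one language to another.")
print(result[0]["translation_text"])
```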
In the first post, we walked through the prerequisites for a neural machine translation example from English to Chinese, running the pretrained model with NeMo, and evaluating its performance. In this post, we walk you through curating a custom dataset and fine-tuning the model on that dataset. Custom data collection is crucial in model fine-tuning because it enables a model to adapt to…
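To make the data-curation step concrete, the following is a simplified Python sketch of one common cleaning pass for parallel corpora: dropping empty, overlong, or badly length-mismatched sentence pairs and writing aligned source/target files. The file names, thresholds, and tab-separated input format are illustrative assumptions, not the exact pipeline described in the post.

```python
# Simplified curation sketch for parallel English-Chinese NMT data.
# Input format (raw_pairs.tsv: "english<TAB>chinese"), file names, and
# thresholds are hypothetical examples for illustration only.

MAX_CHARS = 512     # drop very long sentences (character count)
MAX_RATIO = 3.0     # drop pairs with wildly mismatched lengths

def keep(src: str, tgt: str) -> bool:
    """Rough length and length-ratio filters for a sentence pair."""
    src_len, tgt_len = len(src.strip()), len(tgt.strip())
    if src_len == 0 or tgt_len == 0:
        return False
    if src_len > MAX_CHARS or tgt_len > MAX_CHARS:
        return False
    return max(src_len, tgt_len) / min(src_len, tgt_len) <= MAX_RATIO

with open("raw_pairs.tsv", encoding="utf-8") as fin, \
     open("train.en", "w", encoding="utf-8") as f_en, \
     open("train.zh", "w", encoding="utf-8") as f_zh:
    for line in fin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 2:
            continue          # skip malformed rows
        en, zh = parts
        if keep(en, zh):
            f_en.write(en + "\n")   # aligned source file
            f_zh.write(zh + "\n")   # aligned target file
```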
Described as the largest system in the pharmaceutical industry, BioHive-2 at the Salt Lake City headquarters of Recursion debuts today at No. 35, up more than 100 spots from its predecessor on the latest TOP500 list of the world’s fastest supercomputers. The advance represents the company’s most recent effort to accelerate drug discovery with NVIDIA…
Driving a fundamental shift in the high-performance computing industry toward AI-powered systems, NVIDIA today announced nine new supercomputers worldwide are using NVIDIA Grace Hopper™ Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.
NVIDIA today announced that it will accelerate quantum computing efforts at national supercomputing centers around the world with the open-source NVIDIA CUDA-Q™ platform.
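For readers new to the platform, below is a minimal Bell-state sketch using CUDA-Q’s Python interface (the cudaq package). The decorator and sampling calls reflect the documented open-source interface, but details can differ between releases, so treat this as an approximation rather than a definitive recipe.

```python
# Minimal CUDA-Q sketch: prepare and sample a two-qubit Bell state.
# Requires the open-source cudaq Python package; API details may vary
# between CUDA-Q releases, so treat this as an approximation.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)       # allocate two qubits in |00>
    h(qubits[0])                    # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])    # entangle via a controlled-X gate
    mz(qubits)                      # measure both qubits in the Z basis

# Sample the kernel; counts should concentrate on "00" and "11".
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```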
Data centers need an upgraded dashboard to guide their journey to greater energy efficiency, one that shows progress running real-world applications. The formula for energy efficiency is simple: work done divided by energy used. Applying it to data centers calls for unpacking some details. Today’s most widely used gauge — power usage effectiveness (PUE) — …
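As a back-of-the-envelope illustration of the two metrics, the sketch below computes PUE (total facility energy divided by IT equipment energy) alongside a work-per-energy figure. All numbers are made up for demonstration and do not describe any real facility.

```python
# Illustrative comparison of two data center metrics.
# All numbers are made-up examples, not measurements from any facility.

# Power usage effectiveness: total facility energy / IT equipment energy.
# A perfect score is 1.0 (every joule goes to the IT gear).
total_facility_kwh = 1_300_000
it_equipment_kwh = 1_000_000
pue = total_facility_kwh / it_equipment_kwh
print(f"PUE: {pue:.2f}")  # 1.30

# Energy efficiency as the article frames it: work done / energy used.
# Here "work" is an application-level unit, e.g. simulations completed.
simulations_completed = 52_000
efficiency = simulations_completed / it_equipment_kwh
print(f"Work per kWh: {efficiency:.3f} simulations/kWh")
```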
Generative AI is taking root at national and corporate labs, accelerating high-performance computing for business and science. Researchers at Sandia National Laboratories aim to automatically generate code in Kokkos, a parallel programming language designed for use across many of the world’s largest supercomputers. It’s an ambitious effort. The specialized language, developed by researchers from several…
Quantum computing. Drug discovery. Fusion energy. Scientific computing and physics-based simulations are poised to make giant steps across domains that benefit humanity as advances in accelerated computing and AI drive the world’s next big breakthroughs. At GTC in March, NVIDIA unveiled the NVIDIA Blackwell platform, which promises generative AI on trillion-parameter large language models (LLMs)…