NVIDIA Omniverse Machinima Releasing in Open Beta

Technical artists, developers and content creators can now take 3D storytelling to the next level: NVIDIA Omniverse Machinima is now available in open beta.

Omniverse Machinima offers a suite of tools and extensions that enable users to render realistic graphics and animation using scenes and characters from games.

The app includes premade assets from NVIDIA and from games such as Squad from Offworld Industries and Mount & Blade Warband by TaleWorlds Entertainment, with more to come.

Through Omniverse Machinima, users can:

  • Render scenes with materials, surfaces and textures from the NVIDIA MDL library or imported from third-party asset libraries.
  • Animate characters’ faces using a simple voice recording through NVIDIA Audio2Face technology.
  • Create realistic visuals with physically accurate materials through NVIDIA PhysX 5, Blast and Flow extensions.
  • Capture human motion through a video feed using wrnch’s AI pose estimation technology.
  • Leverage the built-in Omniverse RTX Renderer to produce output with the highest fidelity.

NVIDIA and wrnch Inc., the leading provider of computer vision software, are collaborating to deliver AI-powered human pose estimation capabilities in Omniverse Machinima. The extension created by wrnch Inc. includes:

  • wrnch CaptureStream, a free downloadable tool that enables creators to use a mobile device’s camera to capture the human motion that they’d like to reproduce in an application.
  • wrnch AI Pose Estimator, an Omniverse extension that enables creators to detect and connect to the wrnch CaptureStream application running on the local network.

Omniverse users can leverage the wrnch Engine, which extracts human motion from video feeds and uses pose estimation algorithms to track skeletal joints and mimic the movements on the 3D character. 
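
The retargeting step can be pictured simply: the pose estimator emits per-frame skeletal keypoints, joint angles are derived from those keypoints, and the angles drive the character's rig. The sketch below illustrates only that last step with NumPy; the keypoint names and values are hypothetical and it does not use the actual wrnch API.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (radians) at `joint` between the parent->joint and joint->child bones."""
    a = parent - joint
    b = child - joint
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 3D keypoints for one captured frame (right shoulder, elbow, wrist).
keypoints = {
    "r_shoulder": np.array([0.00, 1.40, 0.00]),
    "r_elbow":    np.array([0.25, 1.15, 0.00]),
    "r_wrist":    np.array([0.30, 0.90, 0.10]),
}

# Derive the elbow flexion angle; a real pipeline would hand this to the character rig.
flex = joint_angle(keypoints["r_shoulder"], keypoints["r_elbow"], keypoints["r_wrist"])
print(f"right elbow flexion: {np.degrees(flex):.1f} degrees")
```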

Learn more about NVIDIA Omniverse Machinima and download the open beta today.

NVIDIA Omniverse Audio2Face Available Later This Week in Open Beta

NVIDIA Omniverse Audio2Face will be available later this week in open beta. With the Audio2Face app, Omniverse users can generate AI-driven facial animation from audio sources.

The demand for digital humans is increasing across industries, from game development and visual effects to conversational AI and healthcare. But the animation process is tedious, manual and complex, plus existing tools and technologies can be difficult to use or implement into existing workflows. 

With Omniverse Audio2Face, anyone can now create realistic facial expressions and motions to match any voice-over track. The technology feeds the audio input into a pre-trained deep neural network based on NVIDIA research, and the output of the network drives the facial animation of 3D characters in real time.
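
Conceptually, the voice-over track is sliced into short audio windows, each window is pushed through the pretrained network, and the network returns facial-animation values (for example, blendshape weights) for that frame. The loop below mocks that flow in plain Python; `audio2face_net` is a stand-in for the real pretrained model, and the sample rate, window size and output size are assumptions rather than the Omniverse API.

```python
import numpy as np

SAMPLE_RATE = 16_000           # assumed audio sample rate
WINDOW = SAMPLE_RATE // 30     # one window per animation frame at 30 fps
NUM_BLENDSHAPES = 52           # assumed size of the facial rig's blendshape set

def audio2face_net(window: np.ndarray) -> np.ndarray:
    """Stand-in for the pretrained network: maps an audio window to blendshape weights."""
    rng = np.random.default_rng(int(abs(window).sum() * 1e6) % (2**32))
    return rng.random(NUM_BLENDSHAPES)

# Random samples standing in for a recorded voice-over track.
audio = np.random.randn(SAMPLE_RATE * 2).astype(np.float32)

for frame in range(len(audio) // WINDOW):
    window = audio[frame * WINDOW:(frame + 1) * WINDOW]
    weights = audio2face_net(window)   # per-frame facial animation values
    # apply_to_character(weights)      # hand the weights to the 3D character rig
```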

Video 1. NVIDIA Omniverse Audio2Face – Multi-Instance Character Animation.

The open beta release includes:

  • Audio player and recorder: record and play back vocal audio tracks, then input the file to the neural network for immediate animation results.
  • Live mode: use a microphone to drive Audio2Face in real time.
  • Character transfer: retarget generated motions to any 3D character’s face, whether realistic or stylized.
  • Multiple instances: run multiple instances of Audio2Face with multiple characters in the same scene.

Learn more about NVIDIA Omniverse Audio2Face and join the open beta today.

ICYMI: New AI Tools and Technologies Announced at GTC 2021 Keynote

At GTC 2021, NVIDIA announced new software tools to help developers build optimized conversational AI, recommender, and video solutions. Watch the keynote from CEO, Jensen Huang, for insights on all of the latest GPU technologies.

Announcing Availability of NVIDIA Jarvis

Today NVIDIA announced major conversational AI capabilities in NVIDIA Jarvis that will help enterprises build engaging and accurate applications for their customers. These include highly accurate automatic speech recognition, real-time translation for multiple languages and text-to-speech capabilities to create expressive conversational AI agents.

Highlights include:

  • Out-of-the-box speech recognition model trained on multiple large corpora with greater than 90% accuracy
  • Transfer Learning Toolkit in TAO to fine-tune models on any domain
  • Real-time translation for five languages that runs with under 100 ms latency per sentence
  • Expressive Text-To-Speech that delivers 30x higher throughput compared with Tacotron2

The new capabilities are planned for release in Q2 2021 as part of the NVIDIA Jarvis open beta program.

Resources:

 > NVIDIA Jarvis Developer Blogs – includes introduction to Jarvis and tutorials for building conversational AI apps.

Add this GTC session to your calendar to learn more:

 > Building and Deploying a Custom Conversational AI App with NVIDIA Transfer Learning Toolkit and Jarvis


Announcing NVIDIA TAO Framework – Early Access

Today NVIDIA announced NVIDIA Train, Adapt, and Optimize (TAO), a GUI-based, workflow-driven framework that simplifies and accelerates the creation of enterprise AI applications and services. By fine-tuning pretrained models, enterprises can produce domain-specific models in hours rather than months, eliminating the need for large training runs and deep AI expertise.

NVIDIA TAO simplifies the time-consuming parts of a deep learning workflow, from data preparation to training to optimization, shortening the time to value. 

Highlights include:

  • Access a diverse set of pre-trained models including speech, vision, natural language understanding and more
  • Speed up your AI development by over 10x with NVIDIA pre-trained models and TLT
  • Increase model performance with federated learning while preserving data privacy
  • Optimize models for high-throughput, low-latency inference with NVIDIA TensorRT
  • Deploy optimal configurations for any model architecture on CPUs or GPUs with NVIDIA Triton Inference Server
  • Seamlessly deploy and orchestrate AI applications with NVIDIA Fleet Command
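
TAO itself is GUI- and workflow-driven, but the underlying idea is ordinary transfer learning: start from a pretrained backbone, replace the head for your classes, and train briefly on domain data. The snippet below is a generic PyTorch illustration of that concept only, not the TAO or TLT interface; the class count and data are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DOMAIN_CLASSES = 5  # placeholder for the number of classes in your domain

# Start from an ImageNet-pretrained backbone and freeze it.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the target domain.
model.fc = nn.Linear(model.fc.in_features, NUM_DOMAIN_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for a labeled domain batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_DOMAIN_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning step loss: {loss.item():.3f}")
```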

Apply for early access to NVIDIA TAO here.


Announcing NVIDIA Maxine – Available for Download Now

Today NVIDIA announced the availability of NVIDIA Maxine SDKs, which are used by developers to build innovative virtual collaboration and content creation applications such as video conferencing and live streaming. Maxine’s state-of-the-art AI technologies are highly optimized and deliver the highest performance possible on GPUs, both on PCs and in data centers.

Highlights from this release include:

  • Video Effects SDK: super resolution, video noise removal, virtual background
  • Augmented Reality SDK: 3D effects such as face tracking and body pose estimation
  • Audio Effects SDK: high quality noise removal and room echo removal

In addition, we announced AI Face Codec, a novel AI-based method from NVIDIA Research to compress videos and render human faces for video conferencing. It can deliver up to a 10x reduction in bandwidth compared with H.264.

Developers building Maxine-based apps can use Jarvis for real-time transcription, translation and virtual assistant capabilities.

Get started with Maxine here.

Resources:

  > Reinvent Video Conferencing, Content Creation & Streaming with AI Using NVIDIA Maxine

Add these GTC sessions to your calendar to learn more:

 > NVIDIA Maxine: An Accelerated Platform SDK for Developers of Video Conferencing Services

 > How to Process Live Video Streams on Cloud GPUs Using NVIDIA Maxine SDK

 > Real-time AI for Video-Conferencing with Maxine


Announcing NVIDIA Triton Inference Server 2.9

Today NVIDIA announced the latest version of the Triton Inference Server. Triton is open-source inference serving software that maximizes performance and simplifies production deployment at scale.

Highlights from this release include:

  • Model Navigator, a new tool in Triton (alpha), automatically converts TensorFlow and PyTorch models to TensorRT plans, validates accuracy, and sets up a deployment environment.
  • Model Analyzer now automatically determines optimal batch size and number of concurrent model instances to maximize performance, based on latency or throughput targets.
  • Support for an OpenVINO backend (beta) for high-performance inferencing on CPUs, a Windows Triton build (alpha), and integration with the MLOps platforms Seldon and Allegro

Download Triton from NGC here. Access code and documentation on GitHub.
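
Once a model is being served, clients send inference requests to Triton over HTTP or gRPC. A minimal sketch with the `tritonclient` Python package is below; the model name and tensor names are placeholders that must match whatever sits in your model repository.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running locally on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder model and tensor names; they must match the model's config in your repository.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
requested = httpclient.InferRequestedOutput("output__0")

response = client.infer(model_name="resnet50", inputs=[infer_input], outputs=[requested])
print(response.as_numpy("output__0").shape)
```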

Add this GTC session to your calendar to learn more:

 > Easily Deploy AI Deep Learning Models at Scale with Triton Inference Server


Announcing TensorRT 8.0

Today NVIDIA announced TensorRT 8.0, the latest version of its high-performance deep learning inference SDK. TensorRT includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. With the new features and optimizations, inference applications can now run up to 2x faster with INT8 precision, with accuracy similar to FP32.

Highlights from this release include:

  • Quantization-aware training to achieve accuracy comparable to FP32 with INT8 precision
  • Support for sparsity on Ampere GPUs, delivering up to 50% higher throughput
  • Up to 2x faster inference for transformer-based networks like BERT with new compiler optimizations

TensorRT 8 will be available in Q2 2021 from the TensorRT page. The latest versions of the samples, parsers and notebooks are always available in the TensorRT open-source repo.
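
As a rough sketch of the INT8 path, the snippet below parses an ONNX model (assumed here to already carry Q/DQ nodes from quantization-aware training) and builds an engine with the INT8 and sparsity flags enabled. The file names are placeholders, and API details may vary slightly across TensorRT 8.x releases.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Placeholder path; the model is assumed to carry Q/DQ nodes from quantization-aware training.
with open("model_qat.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)            # run eligible layers in INT8
config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)  # use structured sparsity on Ampere GPUs
config.max_workspace_size = 1 << 30              # 1 GiB of builder scratch space

serialized_engine = builder.build_serialized_network(network, config)
with open("model_int8.plan", "wb") as f:
    f.write(serialized_engine)
```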

Add these GTC sessions to your calendar to learn more:

 > Accelerate Deep Learning Inference with TensorRT 8.0

 > Quantization Aware Training in PyTorch with TensorRT 8.0


Announcing NVIDIA Merlin End-to-End Accelerated Recommender System

Today NVIDIA announced the latest release of NVIDIA Merlin, an open beta application framework that enables the end-to-end development of deep learning recommender systems, from data preprocessing to model training and inference, all accelerated on NVIDIA GPUs. With this release, Merlin delivers a new API and inference support that streamlines the recommender workflow. 

Highlights from this release include:

  • A new Merlin API that makes it easier to define workflows and training pipelines (see the sketch after this list)
  • Deepened support for inference and integration with Triton Inference Server
  • Transparent scaling to larger datasets and more complex models
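
The new API chains preprocessing ops onto column groups with the `>>` operator. Below is a minimal NVTabular sketch under assumed column names and file paths; your schema will differ.

```python
import nvtabular as nvt
from nvtabular import ops

# Hypothetical columns from a user-item interactions table.
cat_features = ["user_id", "item_id"] >> ops.Categorify()
cont_features = ["price", "age"] >> ops.Normalize()
target = ["clicked"]

# Combine the column groups into a single preprocessing workflow.
workflow = nvt.Workflow(cat_features + cont_features + target)

# Fit statistics on the training split, then write the transformed data back out.
train = nvt.Dataset("train.parquet")
workflow.fit(train)
workflow.transform(train).to_parquet(output_path="train_processed/")
```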

Add these GTC sessions to your calendar to learn more: 

 > End-2-end Deployment of GPU Accelerated Recommender Systems: From ETL to Training to Inference (Training Session)

 > Accelerated ETL, Training and Inference of Recommender Systems on the GPU with Merlin, HugeCTR, NVTabular, and Triton


Announcing Data Labeling & Annotation Partner Services For Transfer Learning Toolkit

Today NVIDIA announced that it is working with six leading NVIDIA partners to provide solutions for data labeling, making it easy to adapt pre-trained models to specific domain data and train quickly and efficiently. These companies are AI Reverie, Appen, Hasty.ai, Labelbox, Sama, and Sky Engine.

Training reliable AI and machine learning models requires vast amounts of accurately labeled data, and acquiring labeled and annotated data at scale is a challenge for many enterprises. With these integrations, developers can use the partner services and platforms with the NVIDIA Transfer Learning Toolkit (TLT) to perform annotation, use partners’ synthetic data with TLT, or use external annotation tools and then import the data into TLT for training and model optimization.

To learn more about the integration, read the developer blog: 

 > Integrating with Data Generation and Labelling Tools for Accurate AI Training

Download Transfer Learning Toolkit and get started here.

Add these GTC sessions to your calendar to learn more:

 > Train Smarter not Harder with NVIDIA Pre-trained models and Transfer Learning Toolkit 3.0

 > Connect with the Experts: Transfer Learning Toolkit and DeepStream SDK for Vision AI/Intelligent Video Analytics


Announcing DeepStream 6.0 

NVIDIA DeepStream SDK is the AI streaming analytics toolkit for building high-performance, low-latency, complex video analytics apps and services. Today NVIDIA announced DeepStream 6.0. This latest version brings a new graphical user interface to help developers build reliable AI applications faster and fast-track the entire workflow from prototyping to deployment across the edge and cloud. With the new GUI and a suite of productivity tools, you can build AI apps in days instead of weeks.

Sign up to be notified for the early access program here.

Add these GTC sessions to your calendar to learn more: 

 > Bringing Scale and Optimization to Video Analytics Pipelines with NVIDIA DeepStream SDK

 > Connect with the Experts: Transfer Learning Toolkit and DeepStream SDK for Vision AI/Intelligent Video Analytics

 > Full list of intelligent video analytics talks at GTC

Register for GTC this week for more on the latest GPU-accelerated AI technologies.

Announcing Megatron for Training Trillion Parameter Models & NVIDIA Jarvis Availability

Conversational AI is opening new ways for enterprises to interact with customers in every industry using applications like real-time transcription, translation, chatbots and virtual assistants. Building domain-specific interactive applications requires state-of-the-art models, optimizations for real time performance, and tools to adapt those models with your data. This week at GTC, NVIDIA announced several major breakthroughs in conversational AI that will bring in a new wave of conversational AI applications.

MEGATRON

NVIDIA Megatron is a PyTorch-based framework for training giant language models based on the transformer architecture. Larger language models are helping produce superhuman-like responses and are being used in applications such as email phrase completion, document summarization and live sports commentary. The Megatron framework has also been harnessed by the University of Florida to develop GatorTron, the world’s largest clinical language model.

Highlights include:

  • Linearly scale training up to 1 trillion parameters on DGX SuperPOD with advanced optimizations and parallelization algorithms (a toy sketch of the tensor-parallel idea follows this list)
  • Built on cuBLAS, NCCL, NVLink and InfiniBand to train language models on multi-GPU, multi-node systems
  • More than 100x throughput improvement when moving from a 1-billion-parameter model on 32 A100 GPUs to a 1-trillion-parameter model on 3,072 A100 GPUs
  • Sustained 50% utilization of Tensor Cores
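
As a toy illustration of the tensor-parallel idea referenced above (and not Megatron's actual implementation), the snippet below splits one linear layer's weight matrix column-wise across two simulated workers and concatenates their partial outputs; this is the basic pattern Megatron scales across GPUs and nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, ffn = 8, 16                          # toy layer sizes
x = rng.standard_normal((4, hidden))         # a batch of activations
W = rng.standard_normal((hidden, ffn))       # full weight matrix of one linear layer

# Tensor parallelism: each "worker" owns a column shard of W and computes a partial output.
W_shards = np.split(W, 2, axis=1)            # two workers -> two column shards
partials = [x @ shard for shard in W_shards]

# Concatenating the partial outputs along the feature axis reproduces the full result.
y_parallel = np.concatenate(partials, axis=1)
assert np.allclose(y_parallel, x @ W)
print("column-parallel output matches the single-worker output")
```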

Read the technical blog post for more details.
Megatron is available on GitHub.

JARVIS

NVIDIA also announced new achievements for Jarvis, a fully accelerated conversational AI framework, including highly accurate automatic speech recognition, real-time translation for multiple languages and text-to-speech capabilities to create expressive conversational AI agents.

Highlights include:

  • Out-of-the-box speech recognition model trained on multiple large corpora with greater than 90% accuracy
  • Transfer Learning Toolkit in TAO to fine-tune models on any domain
  • Real-time translation for five languages that runs with under 100 ms latency per sentence
  • Expressive text-to-speech that delivers 30x higher throughput compared with Tacotron2

These new capabilities will be available in Q2 2021 as part of the ongoing beta program.

The Jarvis beta currently includes state-of-the-art models pre-trained for thousands of hours on NVIDIA DGX; the Transfer Learning Toolkit for adapting those models to your domain with zero coding; and optimized end-to-end speech, vision, and language pipelines that run in real time.

To get started with Jarvis, read this introductory blog on building and deploying custom conversational AI models using Jarvis and NVIDIA Transfer Learning Toolkit. Read the technical blog post >

Next, try these sample applications for ideas on what you can build with Jarvis out-of-the-box:

  1. Jarvis Rasa assistant: End-to-end voice-enabled AI assistant demonstrating the integration of Jarvis Speech and Rasa
  2. Jarvis Contact App: Peer-to-peer video chat with streaming transcription and named entity recognition
  3. Question Answering: Build a QA system with a few lines of Python code using the ready-to-use Jarvis NLP service (a hypothetical sketch follows this list)
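
As a purely hypothetical sketch of what such a QA call could look like on the client side, the snippet below wraps a query-plus-context request in a placeholder class; the class and method names are illustrative only and are not the actual Jarvis client API.

```python
# The class and method names below are illustrative placeholders, not the real Jarvis client API.

class JarvisQAClient:
    """Hypothetical wrapper around a Jarvis question-answering service."""

    def __init__(self, server: str = "localhost:50051"):
        # A real client would open a gRPC channel to the Jarvis server here.
        self.server = server

    def natural_query(self, query: str, context: str) -> str:
        # A real client would send the query and context to the NLP service and
        # return the extracted answer span; this stub just echoes a placeholder.
        return f"(answer to {query!r} extracted from the supplied context)"

client = JarvisQAClient()
context = "NVIDIA Jarvis is a fully accelerated framework for building conversational AI applications."
print(client.natural_query("What is Jarvis?", context))
```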

Join us at NVIDIA GTC for free on April 13th for our session “Building and Deploying a Custom Conversational AI App with NVIDIA Transfer Learning Toolkit and Jarvis” to learn more.

NVIDIA Announces CPU for Giant AI and High Performance Computing Workloads

‘Grace’ CPU delivers 10x performance leap for systems training giant AI models, using energy-efficient Arm cores. Swiss Supercomputing Center and US Department of Energy’s Los Alamos National …

NVIDIA and Partners Collaborate on Arm Computing for Cloud, HPC, Edge, PC

NVIDIA GPU + AWS Graviton2-Based Amazon EC2 Instances, HPC Developer Kit with Ampere Computing CPU and Dual GPUs, More Initiatives Help Expand Opportunities for Arm-Based Solutions. SANTA CLARA, …

Swiss National Supercomputing Centre, Hewlett Packard Enterprise and NVIDIA Announce World’s Most Powerful AI-Capable Supercomputer

‘Alps’ system to advance research across climate, physics, life sciences with 7x more powerful AI capabilities than current world-leading system for AI on MLPerf. LUGANO, Switzerland, April 12, …

NVIDIA’s New CPU to ‘Grace’ World’s Most Powerful AI-Capable Supercomputer

NVIDIA’s new Grace CPU will power the world’s most powerful AI-capable supercomputer. The Swiss National Supercomputing Centre’s (CSCS) new system will use Grace, a revolutionary Arm-based data center CPU introduced by NVIDIA today, to enable breakthrough research in a wide range of fields. From climate and weather to materials sciences, astrophysics, computational fluid dynamics, life … Read article >

NVIDIA and Global Computer Makers Launch Industry-Standard Enterprise Server Platforms for AI

NVIDIA-Certified Servers with NVIDIA AI Enterprise Software Running on VMware vSphere Simplify and Accelerate Adoption of AI. SANTA CLARA, Calif., April 12, 2021 (GLOBE NEWSWIRE) — NVIDIA today …

NVIDIA AI-on-5G Computing Platform Adopted by Leading Service and Network Infrastructure Providers

Fujitsu, Google Cloud, Mavenir, Radisys and Wind River to Deliver Solutions for Smart Hospitals, Factories, Warehouses and Stores. SANTA CLARA, Calif., April 12, 2021 (GLOBE NEWSWIRE) — GTC — …