Categories
Misc

Experience the Ease of AI Model Creation with the TAO Toolkit on LaunchPad

The TAO Toolkit lab on LaunchPad has everything you need to experience the end-to-end process of fine-tuning and deploying an object detection application.

Building AI models from scratch is incredibly difficult, requiring mountains of data and an army of data scientists. With the NVIDIA TAO Toolkit, you can use the power of transfer learning to fine-tune NVIDIA pretrained models with your own data and optimize them for inference, all without AI expertise or large training datasets.
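The TAO Toolkit's internals aren't shown in this post, but the transfer-learning idea it relies on can be sketched in a few lines of plain Python (all names and numbers here are hypothetical): freeze the pretrained backbone weights and train only a small task-specific head.

```python
# Minimal transfer-learning sketch (illustrative only, not TAO code):
# a frozen pretrained "backbone" weight and a trainable "head" weight.

def backbone(x, w_frozen):
    # Pretrained feature extractor; its weight is never updated.
    return w_frozen * x

def head(feature, w_head):
    # Small task-specific layer; the only part we train.
    return w_head * feature

def train_head(samples, w_frozen, w_head, lr=0.01, epochs=200):
    """Fit the head with gradient descent on squared error."""
    for _ in range(epochs):
        for x, target in samples:
            feat = backbone(x, w_frozen)
            pred = head(feat, w_head)
            grad = 2 * (pred - target) * feat  # d(loss)/d(w_head)
            w_head -= lr * grad
    return w_head

# The "pretrained" backbone doubles its input; we fit the head so the
# composed model maps x -> 6x, so w_head should converge to 3.0.
data = [(1.0, 6.0), (2.0, 12.0), (3.0, 18.0)]
w_head = train_head(data, w_frozen=2.0, w_head=0.0)
```

Because only the head's parameters move, training needs far less data and compute than fitting the whole model, which is the same reason TAO can fine-tune large pretrained models without large datasets.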

You can now experience the TAO Toolkit through NVIDIA LaunchPad, a free program that provides short-term access to a large catalog of hands-on labs. 

LaunchPad helps developers, designers, and IT professionals speed up the creation and deployment of modern, data-intensive applications. It is the best way to experience the NVIDIA hardware and software stack working in unison to power your AI applications.

TAO Toolkit on LaunchPad 

The TAO Toolkit lab on LaunchPad has everything you need to experience the end-to-end process of fine-tuning and deploying an object detection application. 

Object detection is a popular computer vision task that involves classifying objects and localizing them with bounding boxes in images or video frames. It is used in real-world applications in retail (self-checkout, for example), transportation, manufacturing, and more. 

With the TAO Toolkit, you can also: 

  • Achieve up to a 4x inference speed-up with built-in model optimization 
  • Generalize your model with offline and online data augmentation
  • Scale up and out with multi-GPU and multi-node training to speed up model training 
  • Visualize and understand model training performance in TensorBoard

The TAO Toolkit lab is preconfigured with the datasets, GPU-optimized pretrained models, Jupyter notebooks, and the necessary SDKs for you to seamlessly accomplish your task. 

Ready to get started? Apply now to access the free lab.  

Learn more about the TAO Toolkit.


Pony.ai Express: New Autonomous Trucking Collaboration Powered by NVIDIA DRIVE Orin

More than 160 years after the legendary Pony Express delivery service completed its first route, a new generation of “Pony”-emblazoned vehicles is taking an AI-powered approach to long-haul delivery. Autonomous driving company Pony.ai announced today a partnership with SANY Heavy Truck (SANY), China’s largest heavy equipment manufacturer, to jointly develop level 4 autonomous trucks.

The post Pony.ai Express: New Autonomous Trucking Collaboration Powered by NVIDIA DRIVE Orin appeared first on NVIDIA Blog.


Enabling Enterprise Cybersecurity Protection with a DPU-Accelerated, Next-Generation Firewall

Palo Alto Networks and NVIDIA have developed an Intelligent Traffic Offload (ITO) solution to solve the scaling, efficiency, and economic challenges that growing network traffic creates for firewalls.

Cyberattacks are growing in sophistication and present an ever-increasing challenge. This challenge is compounded by growth in remote-workforce connections driving secure tunneled traffic at the edge and core, expanding traffic-encryption mandates for federal government and healthcare networks, and an increase in video traffic.

In addition, an increase in mobile and IoT traffic is being generated by the introduction of 5G speeds and the addition of billions of connected devices.

These trends are creating new security challenges that require a new direction in cybersecurity to maintain adequate protection. IT departments—and firewalls—must inspect exponentially more data and take deeper looks inside traffic flows to address new threats. They must be able to check traffic between virtual machines and containers that run on the same host, traffic that traditional firewall appliances cannot see.

Operators must deploy enough firewalls capable of handling the total traffic throughput, but doing so without sacrificing performance can be extremely cost-prohibitive. This is because general-purpose processors (server CPUs) are not optimized for packet inspection and cannot handle the higher network speeds. This results in suboptimal performance, poor scalability, and increased consumption of expensive CPU cores.

Security applications such as next-generation firewalls (NGFW) are struggling to keep up with higher traffic loads. While software-defined NGFWs offer the flexibility and agility to place firewalls anywhere in modern data centers, scaling them for performance, efficiency, and economics is challenging for today’s enterprises.

Next-generation firewalls

To address these challenges, NVIDIA partnered with Palo Alto Networks to accelerate their VM-Series Next-Generation Firewalls with the NVIDIA BlueField data processing unit (DPU). The DPU accelerates packet filtering and forwarding by offloading traffic from the host processor to dedicated accelerators and Arm cores on the BlueField DPU.

The solution delivers the intrusion prevention and advanced security capabilities of Palo Alto Networks’ virtual NGFWs to every server without sacrificing network performance or consuming the CPU cycles needed for business applications. This hardware-accelerated, software-defined NGFW is a milestone in boosting firewall performance and maximizing data center security coverage and efficiency.

The DPU operates as an intelligent network filter to parse and steer traffic flows based on predefined policies with zero CPU overhead, enabling the NGFW to support close to 100 Gb/s throughput for typical use cases. This is a 5x performance boost versus running the VM-Series firewall on a CPU alone, and up to 150 percent CapEx savings compared to legacy hardware.

Intelligent traffic offload service

The joint Palo Alto Networks-NVIDIA solution creates an intelligent traffic offload (ITO) service that overcomes the challenges of performance, scalability, and efficiency. Integration of the VM-Series NGFWs with the NVIDIA BlueField DPUs turbocharges the NGFW solution to improve cost economics while improving threat detection and mitigation. 

Figure 1. ITO using the Palo Alto Networks NGFW with the BlueField DPU helps enterprises balance performance, security, and cost: roughly 20% of traffic benefits from security inspection, while the other 80% (video, VoIP, and so on) does not 

In certain customer environments, up to 80% of network traffic doesn’t need to be—or can’t be—inspected by a firewall, such as encrypted traffic or streaming traffic from video, gaming, and conferencing. NVIDIA and Palo Alto Networks’ joint solution addresses this through the ITO service, which examines network traffic to determine whether each session would benefit from deep security inspection. 

ITO optimizes firewall resources by checking all control packets but only checking payload flows that require deep security inspection. If the firewall determines that a session would not benefit from security inspection, it inspects only the initial packets of the flow; ITO then instructs the DPU to forward all subsequent packets in that session directly to their destination without sending them through the firewall (Figure 2).
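As a rough sketch of that control flow (not actual Palo Alto Networks or BlueField code; the flow types, threshold, and names are invented), the firewall inspects the first few packets of each session and then tells the DPU to fast-path sessions that don't benefit from deep inspection:

```python
# Illustrative ITO decision logic: the firewall inspects the first
# packets of each flow, then offloads sessions that gain nothing from
# deep inspection (for example, already-encrypted streaming traffic).

INSPECT_FIRST_N = 3

def needs_deep_inspection(flow_type):
    # Hypothetical policy: encrypted/streaming payloads gain little
    # from payload inspection once the session is classified.
    return flow_type not in {"video", "voip", "encrypted"}

def process_packets(packets):
    offloaded = set()          # sessions the DPU forwards directly
    inspected = bypassed = 0
    seen = {}                  # per-session inspected-packet counts
    for session_id, flow_type in packets:
        if session_id in offloaded:
            bypassed += 1      # DPU fast path: never reaches firewall
            continue
        inspected += 1         # firewall sees this packet
        seen[session_id] = seen.get(session_id, 0) + 1
        if seen[session_id] == INSPECT_FIRST_N and not needs_deep_inspection(flow_type):
            offloaded.add(session_id)
    return inspected, bypassed

# One video session (offloadable) and one web session (inspected fully).
pkts = [("s1", "video")] * 10 + [("s2", "web")] * 10
inspected, bypassed = process_packets(pkts)
```

Here the firewall sees only 3 of the 10 video packets; the other 7 bypass it entirely, while the web session is inspected end to end.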

Figure 2. DPU acceleration of the NGFW provides unprecedented performance and efficiency gains: a 5x performance improvement with fewer CPU cores required for security inspection

By only examining flows that can benefit from security inspection and offloading the rest to the DPU, the overall load on the firewall and the host CPU is reduced, and performance increases without sacrificing security.
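A quick back-of-the-envelope check shows why offloading most traffic yields roughly the 5x figure quoted earlier:

```python
# If the DPU forwards 80% of traffic without firewall involvement, the
# firewall processes only the remaining 20%, so for the same firewall
# capacity the server can carry roughly 1 / 0.2 = 5x the total traffic.

offload_fraction = 0.80
firewall_share = 1.0 - offload_fraction
effective_speedup = 1.0 / firewall_share   # ~5x
```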

ITO empowers enterprises to protect end users with an NGFW that can run on every host in a zero-trust environment, helping expedite their digital transformation while keeping them safe from a myriad of cyber threats.

First NGFW to market

To stay ahead of emerging threats, Palo Alto Networks jointly developed the first virtual NGFW to be accelerated by BlueField DPU. The VM-Series firewall enables application-aware segmentation, prevents malware, detects new threats, and stops data exfiltration, all at higher speeds and with less CPU consumption, by offloading these tasks from the host processor to the BlueField DPU.

The DPU operates as an intelligent network filter to parse, classify, and steer traffic flows with zero CPU overhead, enabling the NGFW to support close to 100 Gb/s throughput per server for typical use cases. The recently announced DPU-enabled Palo Alto Networks VM-Series NGFW uses zero-trust network security principles.

The ITO solution was presented at NVIDIA GTC during a joint session with Palo Alto Networks. For more information about the ITO service’s role in delivering a software-defined, hardware-accelerated NGFW that addresses ever-evolving cybersecurity threats for enterprise data centers, see the Accelerating Enterprise Cybersecurity with Software-Defined DPU-Powered Firewall GTC session.


Upcoming Webinar: Migrating ROS-based Robot Simulations from Ignition Gazebo to NVIDIA Isaac Sim

Join this webinar on August 4, 2022 to learn about moving from an Ignition Gazebo simulation to Isaac Sim using the Ignition-Omniverse experimental converter.


NVIDIA AI Platform Delivers Big Gains for Large Language Models

The NVIDIA AI platform makes LLMs accessible: new parallelism techniques and a hyperparameter tool speed up training by 30% on any number of GPUs.

As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo Megatron framework that provide training speed-ups of up to 30%.

These updates, which include two trailblazing techniques and a hyperparameter tool to optimize and scale training of LLMs on any number of GPUs, offer new capabilities to train and deploy models using the NVIDIA AI platform. 

BLOOM, the world’s largest open-science, open-access multilingual language model with 176 billion parameters, was recently trained on the NVIDIA AI platform, enabling text generation in 46 languages and 13 programming languages. The NVIDIA AI platform has also powered one of the most powerful transformer language models, the 530-billion-parameter Megatron-Turing NLG model (MT-NLG).

Recent advances in LLMs

LLMs are one of today’s most important advanced technologies, involving up to trillions of parameters that learn from text. Developing them, however, is an expensive, time-consuming process that demands deep technical expertise, distributed infrastructure, and a full-stack approach.

Yet their benefit is enormous in advancing real-time content generation, text summarization, customer service chatbots, and question-answering for conversational AI interfaces. 

To advance LLMs, the AI community is continuing to innovate on tools such as Microsoft DeepSpeed, Colossal-AI, Hugging Face BigScience, and Fairscale, which are powered by the NVIDIA AI platform through Megatron-LM, Apex, and other GPU-accelerated libraries.

These new optimizations to the NVIDIA AI platform help solve many of the existing pain points across the entire stack. NVIDIA looks forward to working with the AI community to continue making the power of LLMs accessible to everyone. 

Build LLMs faster

The latest updates to NeMo Megatron offer 30% speed-ups for training GPT-3 models ranging in size from 22 billion to 1 trillion parameters. Training can now be done on 175-billion-parameter models using 1,024 NVIDIA A100 GPUs in just 24 days, reducing time to results by 10 days, or some 250,000 hours of GPU computing, compared to prior releases.
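The 250,000-hour figure checks out as simple arithmetic: 10 days saved across 1,024 GPUs.

```python
# Sanity check on the "some 250,000 hours of GPU computing" figure.
gpus = 1024
days_saved = 10
gpu_hours_saved = gpus * days_saved * 24   # 245,760, i.e. ~250,000
```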

NeMo Megatron is a quick, efficient, and easy-to-use end-to-end containerized framework for collecting data, training large-scale models, evaluating models against industry-standard benchmarks, and for inference with state-of-the-art latency and throughput performance.

It makes LLM training and inference easy and reproducible on a wide range of GPU cluster configurations. Currently, these capabilities are available to early-access customers to run on NVIDIA DGX SuperPODs and NVIDIA DGX Foundry, as well as in the Microsoft Azure cloud. Support for other cloud platforms will be available soon. 

You can try the features on NVIDIA LaunchPad, a free program that provides short-term access to a catalog of hands-on labs on NVIDIA-accelerated infrastructure. 

Two new techniques to speed-up LLM training

Two new techniques included in the updates that optimize and scale the training of LLMs are sequence parallelism (SP) and selective activation recomputation (SAR).

Sequence parallelism expands tensor-level model parallelism by noticing that the regions of a transformer layer that haven’t previously been parallelized are independent along the sequence dimension. 

Splitting these layers along the sequence dimension enables distribution of the compute and, most importantly, the activation memory for these regions across the tensor parallel devices. Since the activations are distributed, more activations can be saved for the backward pass instead of recomputing them.
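A toy sketch of the idea (framework internals omitted; the sizes are illustrative): shard the per-token activations into contiguous chunks so that each tensor-parallel device stores only its slice.

```python
# Toy illustration of sequence parallelism: activations for a sequence
# are sharded along the sequence dimension, so each tensor-parallel
# device stores only its own chunk of the activation memory.

def shard_sequence(activations, num_devices):
    """Split a list of per-token activations into contiguous chunks."""
    chunk = len(activations) // num_devices
    return [activations[i * chunk:(i + 1) * chunk] for i in range(num_devices)]

seq_len, hidden = 8, 4
# One activation vector per token (seq_len x hidden values in total).
acts = [[0.0] * hidden for _ in range(seq_len)]

shards = shard_sequence(acts, num_devices=4)
per_device_values = len(shards[0]) * hidden      # 2 tokens x 4 = 8
total_values = seq_len * hidden                  # 32
```

With 4 devices, each holds a quarter of the activation memory, which is exactly why more activations can be kept for the backward pass instead of being recomputed.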

Figure 1. Parallelism modes within a transformer layer: sequence parallelism is applied in the LayerNorm and Dropout layers, while tensor parallelism is applied in the attention and FFN layers

Selective activation recomputation improves cases where memory constraints force the recomputation of some, but not all, of the activations, by noticing that different activations require different numbers of operations to recompute. 

Instead of checkpointing and recomputing full transformer layers, it’s possible to checkpoint and recompute only parts of each transformer layer that take up a considerable amount of memory but aren’t computationally expensive to recompute. 
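The selection policy can be sketched with invented cost numbers: recompute activations that are memory-heavy but cheap to redo, and store the ones that are expensive to recompute.

```python
# Toy policy behind selective activation recomputation. The names,
# memory costs, and FLOP counts below are made up for illustration.

# (name, memory_cost, recompute_flops) per activation in a layer.
activations = [
    ("qkt_matmul_softmax", 8.0, 1.0),   # big memory, cheap to recompute
    ("attention_over_v",   6.0, 1.5),
    ("linear_projection",  2.0, 9.0),   # small memory, costly to redo
]

def choose_recompute(acts, cheap_flops_threshold=2.0):
    """Recompute only activations whose recompute cost is low."""
    recompute = [n for n, _, flops in acts if flops <= cheap_flops_threshold]
    stored = [n for n, _, flops in acts if flops > cheap_flops_threshold]
    return recompute, stored

recompute, stored = choose_recompute(activations)
saved_memory = sum(m for n, m, _ in activations if n in recompute)
```

In this toy example, recomputing the two attention activations frees 14 of the 16 memory units while adding only a small amount of extra compute, mirroring the trade-off SAR makes inside real transformer layers.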

For more information, see Reducing Activation Recomputation in Large Transformer Models.

Figure 2. Self-attention block. The red dashed line shows the regions to which selective activation recomputation is applied: the QK^T matrix multiply, softmax, softmax dropout, and attention-over-V operations 

Figure 3. Amount of activation memory required in the backward pass with SP and SAR. As model size increases, both SP and SAR deliver similar savings, reducing the memory required by ~5x 

Figure 4. Computation overhead for full activation recomputation versus SP plus SAR. Bars represent the per-layer breakdown of forward, backward, and recompute times; the baseline has no recomputation and no sequence parallelism. These techniques sharply reduce the overhead incurred when all activations are recomputed instead of saved: for the largest models, overhead drops from 36% to just 2% 

Accessing the power of LLMs also requires a highly optimized inference strategy. You can apply the trained models directly for inference and optimize them for different use cases using p-tuning and prompt-tuning capabilities. 

These capabilities are parameter-efficient alternatives to fine-tuning and allow LLMs to adapt to new use cases without the heavy-handed approach of fine-tuning the full pretrained models. In this technique, the parameters of the original model are not altered. As such, catastrophic ‘forgetting’ issues associated with fine-tuning models are avoided.

For more information, see Adapting P-Tuning to Solve Non-English Downstream Tasks.

New hyperparameter tool for training and inference

Finding model configurations for LLMs across distributed infrastructure is a time-consuming process. NeMo Megatron introduces a hyperparameter tool to automatically find optimal training and inference configurations, with no code changes required. This enables LLMs to be trained to convergence, and deployed for inference, from day one, eliminating time wasted searching for efficient model configurations.

It uses heuristics and an empirical grid search to find the configurations with the best throughput across distinct parameters: data parallelism, tensor parallelism, pipeline parallelism, sequence parallelism, micro batch size, and number of activation checkpointing layers (including selective activation recomputation).
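The real tool's internals aren't published in this post, but the kind of search it performs can be sketched with a made-up cost model (the penalty formula and all numbers are purely illustrative):

```python
# Sketch of a grid search over parallelism configurations: score each
# candidate with a hypothetical throughput model and keep the best one.

from itertools import product

TOTAL_GPUS = 8

def throughput(tp, pp, micro_batch):
    # Hypothetical cost model: more parallelism adds communication
    # overhead, larger micro batches amortize it.
    comm_penalty = 1.0 + 0.1 * (tp - 1) + 0.2 * (pp - 1)
    return micro_batch * TOTAL_GPUS / comm_penalty

def best_config():
    best = None
    for tp, pp, mb in product([1, 2, 4], [1, 2], [1, 2, 4]):
        if tp * pp > TOTAL_GPUS:
            continue            # config doesn't fit the GPU count
        score = throughput(tp, pp, mb)
        if best is None or score > best[0]:
            best = (score, {"tensor": tp, "pipeline": pp, "micro_batch": mb})
    return best

score, config = best_config()
```

Under this toy model, the least-parallel configuration with the largest micro batch wins; the real tool evaluates far richer models and measured throughputs, but the select-the-max structure is the same.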

Using the hyperparameter tool and NVIDIA testing with containers on NGC, we arrived at the optimal training configuration for a 175B GPT-3 model in under 24 hours (Figure 5). Compared with a common configuration that uses full activation recomputation, this yields a 20-30% throughput speed-up. The latest techniques add a further 10-20% throughput speed-up for models with more than 20B parameters. 

Figure 5. HP tool results on several containers, with each node an NVIDIA DGX A100: the 22.06 container with sequence parallelism and selective activation recomputation delivers a 30% speed-up over the 22.05 containers with full recomputation or HP tool capabilities alone

The hyperparameter tool can also find model configurations that achieve the highest throughput or the lowest latency during inference. Latency and throughput constraints can be provided for serving the model, and the tool recommends suitable configurations.
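This constraint-based selection might be sketched as follows, with invented per-configuration measurements:

```python
# Given measured (throughput, latency) per candidate configuration
# (numbers invented), keep only the configs that meet the latency
# budget and pick the one with the highest throughput.

candidates = [
    {"name": "tp8_pp1", "throughput": 110.0, "latency_ms": 610.0},
    {"name": "tp4_pp2", "throughput": 95.0,  "latency_ms": 480.0},
    {"name": "tp2_pp4", "throughput": 70.0,  "latency_ms": 350.0},
]

def pick(configs, max_latency_ms):
    viable = [c for c in configs if c["latency_ms"] <= max_latency_ms]
    return max(viable, key=lambda c: c["throughput"]) if viable else None

# With a 500 ms budget, the fastest config (tp8_pp1) is excluded and
# tp4_pp2 wins on throughput among the remaining candidates.
choice = pick(candidates, max_latency_ms=500.0)
```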

Figure 6. HP tool results for inference, showing throughput per GPU and latency of several configurations for 175B, 40B, and 20B GPT-3 models. Optimal configurations combine high throughput with low latency 

To explore the latest updates to the NVIDIA AI platform for LLMs, apply for early access to NeMo Megatron. Enterprises can also try NeMo Megatron on NVIDIA LaunchPad, available at no charge.

Acknowledgements

We would like to thank Vijay Korthikanti, Jared Casper, Virginia Adams, and Yu Yao for their contributions to this post.


Building a Speech-Enabled AI Virtual Assistant with NVIDIA Riva on Amazon EC2

Learn how to get started with NVIDIA Riva, a fully accelerated speech AI SDK, on AWS EC2 using Jupyter Notebooks and a sample virtual assistant application.

Speech AI can assist human agents in contact centers, power virtual assistants and digital avatars, generate live captioning in video conferencing, and much more. Under the hood, these voice-based technologies orchestrate a network of automatic speech recognition (ASR) and text-to-speech (TTS) pipelines to deliver intelligent, real-time responses.

Building these real-time speech AI applications from scratch is no easy task. From setting up GPU-optimized development environments to deploying speech AI inference using customized large transformer-based language models in under 300 ms, speech AI pipelines require dedicated time, expertise, and investment. 

In this post, we walk through how you can simplify the speech AI development process by using NVIDIA Riva to run GPU-optimized applications. Even with no prior experience, you'll learn how to quickly configure a GPU-optimized development environment and run NVIDIA Riva ASR and TTS examples using Jupyter notebooks. After following along, you'll have the virtual assistant demo running in your web browser, powered by NVIDIA GPUs on Amazon EC2.

Along with the step-by-step guide, we also provide you with resources to help expand your knowledge so you can go on to build and deploy powerful speech AI applications with NVIDIA support.

But first, here is how the Riva SDK works. 

How does Riva simplify speech AI?

Riva is a GPU-accelerated SDK for building real-time speech AI applications. It helps you quickly build intelligent speech applications, such as AI virtual assistants. 

By using powerful optimizations with NVIDIA TensorRT and NVIDIA Triton, Riva can build and deploy customizable, pretrained, out-of-the-box models that can deliver interactive client responses in less than 300ms, with 7x higher throughput on NVIDIA GPUs compared to CPUs.

The state-of-the-art Riva speech models have been trained for millions of GPU hours on thousands of hours of audio data. When you deploy Riva on your platform, these models are ready for immediate use.

Riva can also be used to develop and deploy speech AI applications on NVIDIA GPUs anywhere: on premises, on embedded devices, in any public cloud, or at the edge.

Here are the steps to follow for getting started with Riva on AWS.

Running Riva ASR and TTS examples to launch a virtual assistant

If AWS is where you develop and deploy workloads, you already have access to all the requirements needed for building speech AI applications. With a broad portfolio of NVIDIA GPU-powered Amazon EC2 instances combined with GPU-optimized software like Riva, you can accelerate every step of the speech AI pipeline.

There are four simple steps to get started with Riva on an NVIDIA GPU-powered Amazon EC2 instance:

  1. Launch an Amazon EC2 instance with the NVIDIA GPU-Optimized AMI.
  2. Pull the Riva container from the NGC catalog.
  3. Run the Riva ASR and TTS Hello World examples with Jupyter notebooks.
  4. Launch an intelligent virtual assistant application.

To follow along, make sure that you have an AWS account with access to NVIDIA GPU-powered instances (for example, Amazon EC2 G and P instance types such as P4d instances for NVIDIA A100 GPUs and G4dn instances for NVIDIA T4 GPUs).

Step 1: Launch an EC2 instance with the NVIDIA GPU-optimized AMI

In this post, you use the NVIDIA GPU-optimized AMI available on the AWS Marketplace. It is preconfigured with NVIDIA GPU drivers, CUDA, Docker toolkit, runtime, and other dependencies. It also provides a standardized stack for you to build speech AI applications. This AMI is validated and updated quarterly by NVIDIA with the newest drivers, security patches, and support for the latest GPUs to maximize performance.

Choose an instance type

In the AWS Management Console, launch an instance from the AWS Marketplace, using the NVIDIA GPU-Optimized AMI.

Instance types available may vary by region. For more information about choosing an appropriate instance type for your use case, see Choosing the right GPU for deep learning on AWS.

We recommend NVIDIA A100 GPUs (P4d instances) for the best performance at scale, but for this guide, a single-GPU NVIDIA A10G instance (g5.xlarge), powered by the NVIDIA Ampere architecture, is fine.

For workloads with more pre- or postprocessing steps, consider larger sizes with the same single GPU, more vCPUs, and higher system memory, or consider the P4d instances, which take advantage of 8x NVIDIA A100 GPUs.

Configure the instance

To connect to the EC2 instance securely, create a key pair.

  • For Key pair type, select RSA.
  • For Private key file format, select .ppk (for use with PuTTY) or .pem, depending on how you plan to connect to the instance.

After the key pair is created, a file is downloaded to your local machine. You need this file in future steps for connecting to the EC2 instance. 

Network settings enable you to control the traffic into and out of your instance. Select Create security group and check the rule Allow SSH traffic from: Anywhere. You can customize this later based on your security preferences.

Finally, configure the storage. For this example, 100 GiB on a general purpose SSD should be plenty.

Now you are ready to launch the instance. If successful, your screen should look like Figure 1.

Figure 1. Success message after launching an instance in the AWS console

Connect to the instance

After a few minutes, you will see your running instance with a public IPv4 DNS under Instances in the sidebar. Keep this address handy, as it is used to connect to the instance over SSH. Note that this address changes every time you start and stop your EC2 instance.

There are a number of ways to connect to your EC2 instance. This post uses the PuTTY SSH client to create a session and set up tunneling into the instance.

You may begin working with your NVIDIA GPU-powered Amazon EC2 instance.

Figure 2. Starting screen of the NVIDIA GPU-Optimized AMI on an EC2 instance, shown in a PuTTY terminal window

Log in with username ubuntu, and make sure that you have the right NVIDIA GPUs running:

nvidia-smi

Step 2: Pull the Riva container from the NGC catalog

To access Riva from your terminal, first create a free NGC account. The NGC catalog is a one-stop shop for GPU-optimized software: containers, pretrained AI models, SDKs, Helm charts, and other helpful AI tools. By signing up, you get access to the complete NVIDIA suite of monthly updated, GPU-optimized frameworks and training tools so that you can build your AI application in no time. 

After you create an account, generate an NGC API key. Keep your generated API key handy.

Now you can configure the NGC CLI (preinstalled with the NVIDIA GPU-Optimized AMI), by executing the following command:

ngc config set

Enter your NGC API key from earlier, make sure that the CLI output format is ASCII or JSON, and follow the instructions in the Choices section of the command line.

After configuration, copy the download command from the Riva Skills Quick Start page by choosing Download at the top right. Run the command in your PuTTY terminal to download the Riva Quick Start resource onto your EC2 Linux instance.

Initialize Riva

After the download is completed, you are ready to initialize and start Riva. 

By default, Riva prepares all of the underlying pretrained models during start-up, which can take up to a couple of hours depending on your Internet speed. To speed up this process, you can instead edit the config.sh file in the /quickstart directory to specify which subset of models to retrieve from NGC.

Within this file, you can also adjust the storage location and specify which GPU to use if more than one is installed on your system. This post uses the default configuration settings. The version number (vX.Y.Z) of the Riva Quick Start resource that you downloaded is used in the following commands (v2.3.0 in this post).

cd riva_quickstart_v2.3.0
bash riva_init.sh
bash riva_start.sh

Riva is now running on your virtual machine. To familiarize yourself with Riva, run the Hello World examples next.

Step 3: Run the Riva ASR and TTS Hello World examples 

There are plenty of tutorials available in the /nvidia-riva GitHub repo. The TTS and ASR Python basics notebooks explore how you can use the Riva API.

Before getting started, you must clone the GitHub repo, set up your Python virtual environment, and install Jupyter on your machine by running the following commands in the /riva_quickstart_v2.3.0 directory:

git clone https://github.com/nvidia-riva/tutorials.git

Install the venv module, create a Python virtual environment named venv-riva-tutorials, and activate it:

sudo apt install python3-venv
python3 -m venv venv-riva-tutorials
. venv-riva-tutorials/bin/activate

When the virtual environment has been activated, install the Riva API and Jupyter. Create an IPython kernel in the /riva_quickstart_v2.3.0 directory.

pip3 install riva_api-2.3.0-py3-none-any.whl
pip3 install nvidia-riva-client
pip3 install jupyter
ipython kernel install --user --name=venv-riva-tutorials

To run some simple Hello World examples, open the /tutorials directory and launch the Jupyter notebook with the following commands:

cd tutorials
jupyter notebook --generate-config
jupyter notebook --ip=0.0.0.0 --allow-root

The GPU-powered Jupyter notebook is now running and is accessible through the web. Copy and paste one of the URLs shown on your terminal to start interacting with the GitHub tutorials.

Open the tts-python-basics.ipynb and asr-python-basics.ipynb notebooks in your browser and trust each notebook by choosing Not Trusted at the top right of the screen. To switch to the venv-riva-tutorials kernel, choose Kernel, Change kernel.

You are now ready to work through the notebook to run your first Hello World Riva API calls using out-of-the-box models (Figure 3).

Figure 3. Example Hello World Riva API notebooks: ‘How do I use Riva ASR APIs with out-of-the-box models?’ and ‘How do I use Riva TTS APIs with out-of-the-box models?’

Explore the other notebooks to take advantage of more advanced Riva customization features, such as word boosting, updating vocabulary, TAO fine-tuning, and more. You can exit Jupyter by pressing Ctrl+C in the PuTTY terminal and exit the virtual environment with the deactivate command.

Step 4: Launch an intelligent virtual assistant

Now that you are familiar with how Riva operates, you can explore how it can be applied with the intelligent virtual assistant in the /nvidia-riva/sample-apps GitHub repo.

To get the application, clone the sample apps repo by running the following command in the /riva_quickstart_v2.3.0 directory:

git clone https://github.com/nvidia-riva/sample-apps.git

Create a Python virtual environment, and install the necessary dependencies: 

python3 -m venv apps-env
. apps-env/bin/activate
pip3 install riva_api-2.3.0-py3-none-any.whl
pip3 install nvidia-riva-client
cd sample-apps/virtual-assistant
pip3 install -U pip
pip3 install -r requirements.txt

Before you run the demo, you must update the config.py file in the Virtual Assistant directory. Vim is one text editor that you can use to modify the file:

vim config.py 

Figure 4. Editing the virtual assistant application’s config.py file in a PuTTY terminal

Make sure that the PORT variable in client_config is set to 8888 and the RIVA_SPEECH_API_URL value is set to localhost:50051.

To allow the virtual assistant to access real-time weather data, sign up for the free tier of weatherstack, obtain your API access key, and insert the key value under WEATHERSTACK ACCESS KEY in riva_config.
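As a hypothetical sketch (the real config.py may organize these settings differently between Riva releases), the values described in this step would look something like:

```python
# Hypothetical sketch of the config.py settings described above;
# the structure in the actual sample app may differ.

client_config = {
    "PORT": 8888,                              # port the web app serves on
}

riva_config = {
    "RIVA_SPEECH_API_URL": "localhost:50051",  # where riva_start.sh listens
    "WEATHERSTACK_ACCESS_KEY": "<your-key>",   # from your weatherstack account
}
```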

Now you are ready to deploy the application! 

Deploy the assistant

Run python3 main.py and go to the following URL: https://localhost:8888/rivaWeather. This webpage opens the weather chatbot.

Figure 5. NVIDIA Riva-powered intelligent virtual assistant sample application

Congratulations! 

You’ve launched an NVIDIA GPU-powered Amazon EC2 instance with the NVIDIA GPU-Optimized AMI, downloaded Riva from NGC, executed basic Riva API commands for ASR and TTS services, and launched an intelligent virtual assistant!

You can stop Riva at any time by executing the following command in the riva_quickstart_v2.3.0 directory:

bash riva_stop.sh

Resources for exploring speech AI tools

You have access to several resources designed to help you learn how to build and deploy speech AI applications:

  • The /nvidia-riva/tutorials GitHub repo contains beginner to advanced scripts to walk you through ASR and TTS augmentations such as ASR word boosting and adjusting TTS pitch, rate, and pronunciation settings. 
  • To build and customize your speech AI pipeline, you can use the low-code NVIDIA TAO Toolkit for AI model development, or the NeMo application framework if you want more visibility under the hood when fine-tuning the fully customizable Riva ASR and TTS pipelines. 
  • Finally, to deploy speech AI applications at scale, you can deploy Riva on Amazon EKS and set up auto-scaling features with Kubernetes.

Interested in learning about how customers deploy Riva in production? Minerva CQ, an AI platform for agent assist in contact centers, has deployed Riva on AWS alongside their own natural language and intent models to deliver a unique and elevated customer support experience in the electric mobility market. 

“Using NVIDIA Riva to process the automatic speech recognition (ASR) on the Minerva CQ platform has been great. Performance benchmarks are superb, and the SDK is easy to use and highly customizable to our needs,” said Cosimo Spera, CEO of Minerva CQ. 

Explore other real-world speech AI use cases in Riva customer stories and see how your company can get started with Riva Enterprise.


NVIDIA Studio Laptops Offer Students AI, Creative Capabilities That Are Best in… Class

Selecting the right laptop is a lot like trying to pick the right major. Both can be challenging tasks where choosing wrongly costs countless hours. But pick the right one, and graduation is just around the corner. The tips below can help the next generation of artists select the ideal NVIDIA Studio laptop to maximize performance for the critical workload demands of their unique creative fields — all within budget.



Welcome Back, Commander: ‘Command & Conquer Remastered Collection’ Joins GeForce NOW

Take a trip down memory lane this week with an instantly recognizable classic, Command & Conquer Remastered Collection, joining the nearly 20 Electronic Arts games streaming from the GeForce NOW library. Speaking of remastered, GeForce NOW members can enhance their gameplay further with improved resolution scaling in the 2.0.43 app update.



How’s That? Startup Ups Game for Cricket, Football and More With Vision AI

Sports produce a slew of data. In a game of cricket, for example, each play generates millions of video-frame data points for a sports analyst to scrutinize, according to Masoumeh Izadi, managing director of deep-tech startup TVConal. The Singapore-based company uses NVIDIA AI and computer vision to power its sports video analytics platform.



Just Released: HPC SDK v22.7 with AWS Graviton3 C7g Support

Four panels vertically laid out each showing a simulation with a black backgroundEnhancements, fixes, and new support for AWS Graviton3 C7g instances, Arm SVE, Rocky Linux OS, OpenMP Tools visibility in Nsight Developer Tools, and more.Four panels vertically laid out each showing a simulation with a black background