Recently, I have been learning and experimenting with deep reinforcement learning (DRL). For many DRL algorithms, you train on a single batch for one epoch at a time, and I observed that TensorFlow 2 performs significantly slower (9–22 times slower) than PyTorch.
This is the first time I have run into this problem. I used to work mostly on supervised computer vision tasks, so I suspect that the performance issue is caused by the small number of batches per epoch/training run: unlike DRL, common CV tasks run many batches over many epochs, and there I saw only a minor performance difference between the two frameworks.
However, I could not solve the problem. I asked on StackOverflow and even opened an issue, but nobody has answered yet. I personally prefer TensorFlow, so I don't want to move to PyTorch unless I have to. I just wonder if anyone can explain why this happens or help me improve the performance with a small number of batches.
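A minimal sketch of the kind of single-batch timing loop described above (illustrative only; it is not the reproducible code from the linked issue). Wrapping the train step in tf.function is usually the first thing to compare against pure eager execution:
import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # Compile the step; comment this decorator out to measure pure eager overhead
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = np.random.rand(64, 64).astype(np.float32)
y = np.random.rand(64, 1).astype(np.float32)

train_step(x, y)  # First call triggers tracing/compilation; exclude it from timing
start = time.perf_counter()
for _ in range(100):
    train_step(x, y)
print(f"avg time per single-batch step: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")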
Github Issue with reproducible code and more detailed explanation:
So I have a powerful machine... at least I think I do, with a GeForce 3080 and all that. Anyway, I'm fairly new to the ML game. I really liked Google's AutoML, where I just fed it a spreadsheet and it reported MAE, RMSLE, and so on. But because I'm new, I can't afford to pay for node hours. Is it possible to basically run the same simulation on my Windows PC? I've got TensorFlow installed but haven't enabled the GPU yet.
Speech is the most natural form of human communication. So, it’s not surprising that we’ve always wanted to interact with and command machines by voice. However, for conversational AI to provide a seamless, natural, and human-like experience, it needs to be trained on large amounts of data representative of the problem the model is trying to solve. The difficulty for machine learning teams is the scarcity of this high-quality, domain-specific data.
Companies are trying to solve this problem and accelerate the widespread adoption of conversational AI with innovative solutions that guarantee the scalability and internationality of models. NVIDIA and DefinedCrowd are two such companies. By providing machine learning engineers with a model-building toolkit and high-quality training data respectively, NVIDIA and DefinedCrowd integrate to create world-class AI simply, easily, and quickly.
DefinedCrowd, a one-stop shop for AI training data
I am the director of machine learning at DefinedCrowd, and our core business is providing high-quality AI training data to companies building world-class AI solutions. Our customers can access this data through DefinedData, an online marketplace of off-the-shelf AI training data available in multiple languages, domains, and recording types.
If you can’t find what you’re looking for in DefinedData, our workflows can serve as standalone or end-to-end data services to build any speech– or text-enabled AI architecture from scratch, to improve solutions already developed, or to evaluate models in production, all with the DefinedCrowd quality guarantee.
Creating conversational AI applications the easy way
NVIDIA NeMo is a toolkit built by NVIDIA for creating conversational AI applications. This toolkit includes collections of pretrained modules for automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech (TTS), enabling researchers and data scientists to easily compose complex neural network architectures and focus on designing their applications.
NeMo and DefinedCrowd integration
Here’s how to connect DefinedCrowd speech workflows to train and improve an ASR model using NVIDIA NeMo. The code can also be accessed on this Google Colab link.
Step 1: Install NeMo Toolkit and dependencies
# First, install NeMo Toolkit and dependencies to run this notebook
!apt-get install -y libsndfile1 ffmpeg
!pip install Cython
## Install NeMo dependencies in the correct versions
!pip install torchtext==0.8.0 torch==1.7.1 pytorch-lightning==1.2.2
## Install NeMo
!python -m pip install nemo_toolkit[all]==1.0.0b3
Step 2: Obtain data using the DefinedCrowd API
Here’s how to connect to the DefinedCrowd API to obtain speech collected data. For more information, see DefinedCrowd API (v2).
# For the demo, use a sandbox environment
auth_url = "https://sandbox-auth.definedcrowd.com"
api_url = "https://sandbox-api.definedcrowd.com"
# These variables should be obtained at the DefinedCrowd Enterprise Portal for your account.
client_id = ""
client_secret = ""
project_id = ""
# GET /projects/{project-id}/deliverables
headers = {"Authorization": "Bearer " + access_token}
response = requests.request(
    "GET", f"{api_url}/projects/{project_id}/deliverables", headers=headers
)
if response.status_code == 200:
    # Pretty print the response
    print(json.dumps(response.json(), indent=4))
    # Get the first deliverable ID
    deliverable_id = response.json()[0]["id"]
[
    {
        "projectId": "eb324e45-c4f9-41e7-b5cf-655aa693ae75",
        "id": "258f9e15-2937-4846-b9c3-3ae1164b7364",
        "type": "Flat",
        "fileName": "data_Flat_eb324e45-c4f9-41e7-b5cf-655aa693ae75_258f9e15-2937-4846-b9c3-3ae1164b7364_2021-03-22-14-34-37.zip",
        "createdTimestamp": "2021-03-22T14:34:37.8037259",
        "isPartial": false,
        "downloadCount": 2,
        "status": "Downloaded"
    }
]
Final deliverable for speech data collection
# Name to give to the deliverable file
filename = "scripted_monologue_en_GB.zip"
# GET /projects/{project-id}/deliverables/{deliverable-id}/download
headers = {"Authorization": "Bearer " + access_token}
response = requests.request(
    "GET",
    f"{api_url}/projects/{project_id}/deliverables/{deliverable_id}/download/",
    headers=headers,
)
if response.status_code == 200:
    # Save the deliverable file
    with open(filename, "wb") as fp:
        fp.write(response.content)
    print("Deliverable file saved with success!")
Deliverable file saved with success!
# Extract the contents from the downloaded file
!unzip scripted_monologue_en_GB.zip &> /dev/null
!rm -f en-gb_single-scripted_Dataset.zip
Step 3: Analyze the speech dataset
Here’s how to analyze the data received from DefinedCrowd. The dataset consists of scripted speech collected through the DefinedCrowd Neevo platform from several speakers in the UK (DefinedCrowd crowd members).
Each row of the dataset contains information about the speech prompt, crowd member, device used, and the recording. The following data is found with this delivery:
Recording:
RecordingId
PromptId
Prompt
Audio File:
RelativeFileName
Duration
SampleRate
BitDepth
AudioCommunicationBand
RecordingEnvironment
Crowd Member:
SpeakerId
Gender
Age
Accent
LivingCountry
Recording Device:
Manufacturer
DeviceType
Domain
This data can be used for multiple purposes, but in this tutorial, I use it to improve an existing ASR model for British speakers.
import pandas as pd
# Look in the metadata file
dataset = pd.read_csv("metadata.tsv", sep="\t", index_col=[0])
# Check the data for the first row
dataset.iloc[0]
RecordingId 165559628
PromptId 64977250
RelativeFileName Audio/165559628.wav
Prompt The Avengers' extinction.
Duration 00:00:02.815
SpeakerId 128209
Gender Female
Age 26
Manufacturer Apple
DeviceType iPhone 6s
Accent Suffolk
Domain generic
SampleRate 16000
BitDepth 16
AudioCommunicationBand Broadband
LivingCountry United Kingdom
Native True
RecordingEnvironment silent
Name: 0, dtype: object
# How many rows do you have?
len(dataset)
50000
# Check some examples from the dataset
import librosa
import IPython.display as ipd
for index, row in dataset.sample(4, random_state=1).iterrows():
    print(f"Prompt: {dataset.iloc[index].Prompt}")
    audio_file = dataset.iloc[index].RelativeFileName
    # Load and listen to the audio file
    audio, sample_rate = librosa.load(audio_file)
    ipd.display(ipd.Audio(audio, rate=sample_rate))
Step 4: Create data manifests for NeMo
After downloading the speech data from the DefinedCrowd API, you must adapt it to the format expected by NeMo for ASR training. For this, you create manifests for the training and evaluation data, including each audio file’s metadata.
NeMo requires the data in a particular manifest format. Each line corresponds to one audio sample, so the line count equals the number of samples represented by the manifest. A line must contain the path to an audio file, the corresponding transcript, and the audio sample duration. For example, here is what one line might look like in a NeMo-compatible manifest:
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
When creating the manifest, also standardize the transcripts (lowercase them and strip linguistic marks).
import json
import os
# Function to build a manifest
def build_manifest(dataframe, manifest_path):
    with open(manifest_path, "w") as fout:
        for index, row in dataframe.iterrows():
            transcript = row["Prompt"]
            # The model uses lowercased data for training/testing
            transcript = transcript.lower()
            # Removing linguistic marks (they are not necessary for this demo)
            transcript = (
                transcript.replace("[b_s/]", "")
                .replace("[uni/]", "")
                .replace("[v_n/]", "")
                .replace("[filler/]", "")
                .replace('"', "")
                .replace("[n_s/]", "")
            )
            audio_path = row["RelativeFileName"]
            # Get the audio duration; skip files that cannot be read
            try:
                duration = librosa.core.get_duration(filename=audio_path)
            except Exception as e:
                print("An error occurred: ", e)
                continue
            if os.path.exists(audio_path):
                # Write the metadata to the manifest
                metadata = {
                    "audio_filepath": audio_path,
                    "duration": duration,
                    "text": transcript,
                }
                json.dump(metadata, fout)
                fout.write("\n")
            else:
                continue
Step 5: Train and test splits
To test the quality of the model, you must reserve some data for model testing. Evaluate the model performance on this data.
import json
from sklearn.model_selection import train_test_split
# Split the dataset: 90% for training and 10% for testing
trainset, testset = train_test_split(dataset, test_size=0.1, random_state=1)
# Build the manifests
build_manifest(trainset, "train_manifest.json")
build_manifest(testset, "test_manifest.json")
Step 6: Configure the model
Here’s how to use the QuartzNet15x5 model as a base model for fine-tuning with the data. To measure the improvement on this dataset, benchmark the model performance first with the base model and later with the fine-tuned version. Some of the following functions were adapted from the NeMo tutorial on ASR.
# Import Nemo and the functions for ASR
import torch
import nemo
import nemo.collections.asr as nemo_asr
import logging
from nemo.utils import _Logger
# Set up the log level by NeMo
logger = _Logger()
logger.set_verbosity(logging.ERROR)
Step 7: Set training parameters
For training, NeMo uses a Python dictionary as the data structure that holds all the parameters. For more information, see the NeMo ASR Config User Guide.
For this tutorial, load a preexisting file with the standard ASR configuration and change only the necessary fields.
## Download the config to use in this example
!mkdir configs
!wget -P configs/ https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/asr/conf/config.yaml &> /dev/null
# --- Config Information ---#
from ruamel.yaml import YAML
config_path = "./configs/config.yaml"
yaml = YAML(typ="safe")
with open(config_path) as f:
    params = yaml.load(f)
Step 8: Download the base model
For the ASR model, use a pretrained QuartzNet15x5 model from the NGC catalog.
The QuartzNet15x5 model was trained on six datasets: LibriSpeech, Mozilla Common Voice (validated clips from en_1488h_2019-12-10), WSJ, Fisher, Switchboard, and NSC Singapore English. It was trained with Apex/Amp optimization level O1 for 600 epochs and achieves a WER of 3.79% on LibriSpeech dev-clean and 10.05% on dev-other.
# This line downloads the pretrained QuartzNet15x5 model from NGC and instantiates it for you
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En", strict=False)
Step 9: Evaluate the base model performance
The word error rate (WER) is a valuable tool for comparing different ASR models and for evaluating improvements within one system. To establish a baseline, assess how the base model performs on the test set.
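As a reminder of what the metric measures, here is a minimal, framework-free sketch of WER (word-level edit distance divided by the number of reference words). NeMo computes this internally through the helper object used below, so this is only for illustration:
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("this is a nemo tutorial", "this is nemo tutorials"))  # 0.4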
# Configure the model parameters for testing
# Parameters for training, validation, and testing are specified using the
# train_ds, validation_ds, and test_ds sections of your configuration file
# Bigger batch-size = bigger throughput
params["model"]["validation_ds"]["batch_size"] = 8
# Set up the test data loader and make sure the model is on GPU
params["model"]["validation_ds"]["manifest_filepath"] = "test_manifest.json"
quartznet.setup_test_data(test_data_config=params["model"]["validation_ds"])
# Comment out this line if you don't want to use GPU acceleration
_ = quartznet.cuda()
# Compute the WER metric between the hypothesis and predictions.
wer_numerators = []
wer_denominators = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` gives you:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
with torch.no_grad():
    for test_batch in quartznet.test_dataloader():
        input_signal, input_signal_length, targets, targets_lengths = [x.cuda() for x in test_batch]
        log_probs, encoded_len, greedy_predictions = quartznet(
            input_signal=input_signal,
            input_signal_length=input_signal_length
        )
        # The model has a helper object to compute WER
        quartznet._wer.update(greedy_predictions, targets, targets_lengths)
        _, wer_numerator, wer_denominator = quartznet._wer.compute()
        wer_numerators.append(wer_numerator.detach().cpu().numpy())
        wer_denominators.append(wer_denominator.detach().cpu().numpy())
# First, sum all numerators and denominators. Then, divide.
print(f"WER = {sum(wer_numerators)/sum(wer_denominators)*100:.2f}%")
WER = 39.70%
Step 10: Fine-tune the model
The base model achieved a WER of 39.7%, which is not great. Providing data from the same domain and dialects may improve the ASR model. For simplicity, train for only one epoch using DefinedCrowd’s data.
import pytorch_lightning as pl
from omegaconf import DictConfig
import copy
# Before training, you must provide the train manifest for training
params["model"]["train_ds"]["manifest_filepath"] = "train_manifest.json"
# Use the smaller learning rate for fine-tuning
new_opt = copy.deepcopy(params["model"]["optim"])
new_opt["lr"] = 0.001
quartznet.setup_optimization(optim_config=DictConfig(new_opt))
# Batch size depends on the GPU memory available
params["model"]["train_ds"]["batch_size"] = 8
# Point to the data to be used for fine-tuning as the training set
quartznet.setup_training_data(train_data_config=params["model"]["train_ds"])
# Clean the torch cache
torch.cuda.empty_cache()
# Now you can create a PyTorch Lightning trainer.
trainer = pl.Trainer(gpus=1, max_epochs=1)
# The fit function starts the training
trainer.fit(quartznet)
Step 11: Compare model performance
Compare the base model’s performance with that of the fine-tuned model obtained by training with the additional data.
# Configure the model parameters for testing
params["model"]["validation_ds"]["batch_size"] = 8
# Set up the test data loader and make sure the model is on GPU
params["model"]["validation_ds"]["manifest_filepath"] = "test_manifest.json"
quartznet.setup_test_data(test_data_config=params["model"]["validation_ds"])
_ = quartznet.cuda()
# Compute the WER metric between the hypothesis and predictions.
wer_numerators = []
wer_denominators = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` gives you:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
with torch.no_grad():
    for test_batch in quartznet.test_dataloader():
        input_signal, input_signal_length, targets, targets_lengths = [x.cuda() for x in test_batch]
        log_probs, encoded_len, greedy_predictions = quartznet(
            input_signal=input_signal,
            input_signal_length=input_signal_length
        )
        # The model has a helper object to compute WER
        quartznet._wer.update(greedy_predictions, targets, targets_lengths)
        _, wer_numerator, wer_denominator = quartznet._wer.compute()
        wer_numerators.append(wer_numerator.detach().cpu().numpy())
        wer_denominators.append(wer_denominator.detach().cpu().numpy())
# First, sum all numerators and denominators. Then, divide.
print(f"WER = {sum(wer_numerators)/sum(wer_denominators)*100:.2f}%")
WER = 24.36%
After fine-tuning the ASR network for a single epoch, I achieved a WER of 24.36%, an improvement over the initial 39.7% of the base model. For better results, consider training for more epochs.
Conclusion
In this tutorial, I demonstrated how to load speech data collected by DefinedCrowd and how to use it to train and measure the performance of an ASR model. I hope I have shown you how easy it is to create world-class AI solutions with NVIDIA and DefinedCrowd.
High-energy physics research aims to understand the mysteries of the universe by describing the fundamental constituents of matter and the interactions between them. Diverse experiments exist on Earth to re-create the first instants of the universe. Two examples of the most complex experiments in the world are at the Large Hadron Collider (LHC) at CERN and the Deep Underground Neutrino Experiment (DUNE) at Fermilab.
The LHC is home to the highest energy particle collisions in the world and the discovery of the Higgs boson. LHC detectors are like ultra–high-speed cameras that capture the remnants of those collisions every 25 nanoseconds to create a 5D image in space, time, and energy. LHC physicists collect huge datasets to find extremely rare events. Those events may give clues about the Higgs boson as a portal to new physics or the particle nature of dark matter.
The DUNE experiment sends a beam of particles called neutrinos from the west suburbs of Chicago to an underground mine 1,300 km away in South Dakota. There, a massive 40-kton detector is being constructed 1.5 km beneath the earth’s surface to observe these feebly interacting particles. Studying neutrinos can help us answer questions such as the origin of matter in the universe and the behavior of core-collapse supernova in the Milky Way galaxy.
These experiments consist of unique and cutting-edge particle detectors that create massive, complex, and rich datasets with billions of events. They require sophisticated algorithms to reconstruct and interpret the data.
Modern machine learning algorithms provide a powerful toolset to detect and classify particles, from familiar image-processing convolutional neural networks to newer graph neural network architectures. A full reconstruction of these particle collisions requires novel approaches to handle the computing challenge of processing so much raw data. In a series of studies, physicists from Fermilab, CERN, and university groups explored how to accelerate their data processing using NVIDIA Triton Inference Server.
In each event, charged particles interact with the liquid argon in the ProtoDUNE-SP detector, liberating ionization electrons that drift across the detector volume under the influence of an electric field. These electrons induce signals as they pass through and are collected by a set of wire planes at the end of the drift path. Two spatial coordinates can be determined from the different angular orientations of the wires in each plane; the third comes from the drift time of the ionization electrons. As a result, a detailed 3D image of the neutrino interaction can be reconstructed.
The most computationally intensive step of the reconstruction process involves an ML algorithm that looks at 48×48 pixel cutouts, or patches. Those patches represent small sections of the full event and the algorithm identifies the particles in them. Importantly, over the entire ProtoDUNE-SP detector, there are thousands of 48×48 patches to be classified, such that a typical event may have approximately 55,000 patches to process. In the following section, we discuss the performance implications of this process and how using NVIDIA Triton Inference Server helps us to scale the deep learning inference.
Similarly, for the LHC, a series of neural networks can be used to process data from low-level cluster calibration and electron energy regression to jet (particle spray) classification.
Figure 3 shows how a similar paradigm is used for the LHC. Hits recorded by the calorimeter system are combined into clusters (zoomed-in section at right). These can then be further combined into higher-level reconstructed particle objects, such as the jet indicated at the bottom left. In simulated events such as this one, the reconstructed clusters can be related to the “truth” information from the simulation software (GEANT) to measure the accuracy of the algorithms.
Compute-intensive process
For the ProtoDUNE-SP detector, the reconstruction processing time is dominated by running convolutional neural network inference for the thousands of patches in each event. When you’re running inference on a typical CPU, this consumes 65% of the total time for reconstruction. The current dataset consists of 400 TB from hundreds of millions of neutrino events. The team decided to use NVIDIA T4 GPUs to speed up this most compute-intensive process. In the initial trial phase, they used T4 instances on Google Cloud.
In production, thousands of client nodes feed detector data (images) into the reconstruction process. The scale of computing is so large that a distributed worldwide grid of computing resources is needed. This poses challenges to coordinating and optimizing resources shared by different sites worldwide. To cope with these challenges, the team decided to use a novel inference-as-a-service computing paradigm for the first time.
Inference as a service with NVIDIA Triton Inference Server
The team implemented their generic approach, called SONIC (Services for Optimized Network Inference on Coprocessors), for inference as a service using NVIDIA Triton Inference Server. This technology is available from the NGC Catalog, a hub for GPU-optimized AI containers, models, and SDKs built to simplify and accelerate AI workflows.
NVIDIA Triton simplifies the deployment of AI models at scale in production. It’s an open-source inference serving software package that helps teams deploy trained AI models:
From any framework: TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework
From any storage: Local, Google Cloud Platform, Amazon S3, or Microsoft Azure Storage
On any GPU- or CPU-based infrastructure: Cloud, data center, or edge
The team deployed the NVIDIA Triton server as a container and used Kubernetes to orchestrate the various cloud resources. Each GPU server in the cluster runs an instance of the NVIDIA Triton server. The clients run on separate, CPU-only nodes and send inference requests using gRPC over the network. Kubernetes handles load balancing and resource scaling for the GPU cluster.
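As an illustration of the client side of such a setup, the following hedged sketch sends a batch of 48×48 patches to a remote Triton server over gRPC using the tritonclient Python package. The server URL, model name, and tensor names ("patch_classifier", "input", "scores") are placeholders, not the experiment's actual configuration:
import numpy as np
import tritonclient.grpc as grpcclient

# Hypothetical endpoint and model/tensor names; the real names come from the deployed model's config
client = grpcclient.InferenceServerClient(url="triton.example.org:8001")

patches = np.random.rand(64, 48, 48, 1).astype(np.float32)  # a batch of detector patches

infer_input = grpcclient.InferInput("input", list(patches.shape), "FP32")
infer_input.set_data_from_numpy(patches)
requested_output = grpcclient.InferRequestedOutput("scores")

result = client.infer(
    model_name="patch_classifier",
    inputs=[infer_input],
    outputs=[requested_output],
)
scores = result.as_numpy("scores")  # per-patch class probabilities
print(scores.shape)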
Outcome
The use of T4 GPUs resulted in a 17x speedup of the most time-consuming ML module of the workflow: track and particle-shower hit identification. The overall workflow (event processing time) was accelerated by a factor of 2.7x.
The following are key benefits that the team achieved:
No disruption. The workflow was accelerated without disruption to any of the other algorithms or experiment software.
Allocation flexibility. In this deployment, many client nodes sent requests to a single GPU. This allowed heterogeneous resources to be allocated and reallocated based on demand and task, providing significant flexibility and potential cost reduction.
Reduced dependencies. There’s a reduced dependency on open-source ML frameworks in the experimental code base. Otherwise, the experiment would be required to integrate and support separate C++ APIs for every framework in use.
Concurrent use. NVIDIA Triton also used all available GPUs automatically when the servers had multiple GPUs, further increasing the flexibility of the server. In addition, NVIDIA Triton can execute multiple models from various ML frameworks concurrently.
Dynamic batching. NVIDIA Triton provides dynamic batching, which combines multiple requests into optimally sized batches to perform inference as efficiently as possible for the task at hand. This effectively enables simultaneous processing of multiple events without any changes to the experiment software framework.
To scale the NVIDIA T4 GPU throughput flexibly, we used a Google Kubernetes Engine (GKE) cluster for server-side workloads. Kubernetes Ingress was used as a load-balancing service to distribute incoming network traffic among the NVIDIA Triton pods. Prometheus-based monitoring was used for the following:
System metrics from the underlying virtual machine
Kubernetes metrics for the overall health and state of the cluster
Inference-specific metrics gathered from NVIDIA Triton through a built-in Prometheus publisher
All metrics were visualized through a Grafana instance, also deployed within the same cluster. The team kept the pod-to-node ratio at 1:1 throughout the studies, with each pod running an instance of NVIDIA Triton Inference Server (v20.02-py3) from NGC. The throughput was maximized when 68 CPU client processes sent requests to a single remote GPU. The exact ratio depends on the algorithm and workflow.
Summary
The offline neutrino reconstruction workflow was accelerated by deploying ML models on NVIDIA T4 GPUs. NVIDIA Triton and Kubernetes helped the team implement inference as a scalable service in a flexible and cost-effective way. Though we focused on a result specific to neutrino physics, a similar result was achieved for the LHC and constitutes a successful proof of concept. These results pave the way for deploying DL inference as a service at scale in high energy physics experiments.
For more information, see the following resources:
We would like to thank, globally, the multi-institutional team that performed these neutrino and LHC studies. For more information about their work, see fastmachinelearning.org. Featured image of the ProtoDUNE detector taken by Maximilien Brice, CERN.
Switzerland-based Assaia International AG, an NVIDIA Metropolis partner and member of the NVIDIA Inception acceleration platform for AI startups, is deploying a deep learning solution at Cincinnati/Northern Kentucky International Airport (CVG) to help airport employees monitor the turnaround time between flights.
The Turnaround Control tool will help the airport work with its airline partners to improve turnaround transparency, identify situations that most often cause delayed flights, and notify employees of deviations from the schedule.
“Assaia’s technology adds critical data points to CVG’s early-stage neural network for operational advancements,” said Brian Cobb, the airport’s chief innovation officer. “Structured data generated by artificial intelligence will provide information to make decisions, optimize airside processes, and improve efficiency and safety.”
The company uses NVIDIA Jetson AGX Xavier modules and the NVIDIA Metropolis intelligent video analytics platform to run image recognition and predictive analysis algorithms on video streams from multiple cameras around an airport.
By installing cameras at several gates, airports can optimize the cleaning, restocking and servicing of planes — saving time for customers and costs for the airlines.
Assaia is also deploying AI solutions at London Gatwick Airport and Seattle-Tacoma International Airport. Watch a replay from the recent GPU Technology Conference for more.
You may have used AI in your smartphone or smart speaker, but have you seen how it comes alive in an artist’s brush stroke, how it animates artificial limbs or assists astronauts in Earth’s orbit? The latest video in the “I Am AI” series — the annual scene setter for the keynote at NVIDIA’s GTC …
Today, NVIDIA is announcing the availability of nvCOMP version 2.0.0. This software can be downloaded now free for members of the NVIDIA Developer Program.
nvCOMP is a CUDA library that features generic compression interfaces to enable developers to use high-performance GPU compressors in their applications.
nvCOMP 2.0.0 includes Cascaded, LZ4, and Snappy compression methods. It also adds support for the external Bitcomp and GDeflate methods. Cascaded compression methods demonstrate high performance with up to 500 GB/s throughput and a high compression ratio of up to 80x on numerical data from analytical workloads. Snappy and LZ4 methods can achieve up to 100 GB/s compression and decompression throughput depending on the dataset, and show good compression ratios for arbitrary byte streams.
NVIDIA AI Platform Smashes Every MLPerf Category, from Data Center to EdgeSANTA CLARA, Calif., April 21, 2021 (GLOBE NEWSWIRE) — NVIDIA today announced that its AI inference platform, newly …
Recommender systems drive engagement on many of the most popular online platforms. As data volume grows exponentially, data scientists increasingly turn from traditional machine learning methods to highly expressive, deep learning models to improve recommendation quality. Often, the recommendations are framed as modeling the completion of a user-item matrix, in which the user-item entry is the user’s interaction with that item.
Most current online recommender systems are based on implicit ratings and framed as clickthrough rate (CTR) prediction tasks. The model estimates the probability of a positive action (click), given user and item characteristics. One of the most popular DNN-based methods is Google’s Wide & Deep Learning for Recommender Systems, which has emerged as a general tool for solving CTR prediction tasks, thanks to its power of generalization (Deep) and memorization (Wide).
The Wide & Deep model falls into a category of content-based recommender models that are like Facebook’s deep learning recommendation model (DLRM), where input to the model consists of characteristics of the User and Item and the output is some form of rating.
In this post, we detail the new TensorFlow 2 implementation of the Wide & Deep model that was recently added to the NVIDIA Deep Learning Examples repository. It provides end-to-end training with easily reproducible results on the Kaggle Outbrain Click Prediction Challenge dataset. This implementation touches on two important aspects of building recommender systems: dataset preprocessing and model training.
First, we introduce the Wide & Deep model and the dataset. Then, we give details on the preprocessing completed in two variants, CPU and GPU. Finally, we discuss aspects of model convergence, training stability, and performance, both for training and evaluation.
Wide & Deep model overview
Wide & Deep refers to a class of networks that use the output of two parts working in parallel—a wide model and a deep model—to make binary prediction of CTR. The wide part is a linear model of features together with their transforms, responsible for the memorization of feature interactions. The deep part is a series of fully connected layers, allowing the model better generalization for unseen cross-features interactions. The model can handle both numerical continuous features as well as categorical features represented as dense embeddings. Figure 1 shows the architecture of the model. We changed the size of the deep part from the original of 1024, 512, 256 into five fully connected layers of 1024 neurons.
Outbrain dataset
The original Wide & Deep paper trains on the Google Play dataset. Because this data is proprietary to Google, we chose a publicly available dataset for easy reproduction. As a reference dataset, we used the Kaggle Outbrain Click Prediction Challenge data. This dataset is preprocessed to obtain a subset of the features engineered by the 19th-place finisher in the Kaggle Outbrain Click Prediction Challenge. This competition challenged competitors to predict the likelihood of a clickthrough for a particular website ad. Competitors were given information about the user, display, document, and ad to train their models. For more information, see Outbrain Click Prediction.
The Outbrain dataset is preprocessed to obtain the feature input for the model. Each sample in the dataset consists of features of the Request (User) and Item, together with a binary output label. Request-level features describe the person and context for which recommendations are made, whereas Item-level features describe the objects to consider recommending; in the Outbrain dataset, these are ads. Request- and Item-level features include numerical features, which you can input directly to the network, and categorical variables, which are represented as trainable embeddings of various dimensions. For more information about feature counts, cardinalities, embedding dimensions, and other dataset characteristics, see the WideAndDeep readme file on GitHub.
Preprocessing
As in every other recommender system, preprocessing is key to efficient recommendation here. We present and compare two dataset preprocessing workflows: Spark-CPU and NVTabular GPU. Both produce datasets with the same number, type, and meaning of features, so the model is agnostic to which preprocessing was used. The presented preprocessing aims to produce the dataset as pre-batched TFRecords to be consumed by the data loader during model training.
Scope of preprocessing
The preprocessing is described in detail in the readme of the Deep Learning Examples repository. In this post, we only outline the scope of the data wrangling needed to create the final 26 features (13 categorical and 13 numerical) obtained from the original Outbrain dataset; a brief illustrative sketch follows the list. Both of the workflows consist of the following operations:
Separating out the validation set for cross-validation.
Filling missing data with mode, median, or imputed values.
Joining click data, ad metadata, and document category, topic, and entity tables to create an enriched table.
Computing seven CTRs for ads grouped by seven features.
Computing the attribute cosine similarity between the landing page and featured ad.
Math transformations of the numeric features (logarithmic, scaling, and binning).
Categorizing data using hash-bucketing.
Storing the resulting set of features in pre-batched TFRecord format.
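To give a feel for what the GPU workflow looks like, here is a hedged NVTabular-style sketch covering a few of the operations listed above; the column names and file paths are placeholders, and the operator set is simplified relative to the actual workflow in the repository:
import nvtabular as nvt
from nvtabular import ops

# Placeholder column names for illustration
CATEGORICAL = ["ad_id", "document_id", "platform"]
CONTINUOUS = ["document_ad_similarity", "publish_time_delta"]

# Categorical columns: map raw values into contiguous integer ids
cat_features = CATEGORICAL >> ops.Categorify()

# Continuous columns: fill missing values, log-transform, and scale
cont_features = CONTINUOUS >> ops.FillMissing() >> ops.LogOp() >> ops.Normalize()

workflow = nvt.Workflow(cat_features + cont_features + ["clicked"])

train_ds = nvt.Dataset("train.parquet")
valid_ds = nvt.Dataset("valid.parquet")

# Fit statistics (category mappings, means/stds) on the training set, then transform both splits
workflow.fit(train_ds)
workflow.transform(train_ds).to_parquet("train_preprocessed/")
workflow.transform(valid_ds).to_parquet("valid_preprocessed/")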
Comparison of preprocessing workflows
To compare the NVTabular and Spark workflows, we built both from a known-good Spark-CPU workflow, included in the NVIDIA Wide & Deep TensorFlow 2 GitHub repository. For simplicity, we limited the number of dataset features to calculate during preprocessing. We chose the most common ones used in recommender systems that both workflows (Spark and NVTabular) support. Because NVTabular is a relatively new framework still in active development, we limited the scope of comparison to features supported by the NVTabular library.
When comparing Spark and NVTabular, we extracted the most important metrics that influence the choice of framework in target preprocessing. Table 1 presents a snapshot comparison of two types of preprocessing using the following metrics:
Threshold result. The necessity of achieving MAP@12 greater than the arbitrarily chosen threshold of 0.655 for the Outbrain dataset.
Source code lines. The lines needed to achieve the set of features that the model uses for training. This single metric tries to capture how difficult it is to create and maintain the production code. It also gives an intuition about the level of difficulty when experimenting with adding new dataset features or changing existing ones.
Total RAM consumption. This estimates the size and type of machine needed to perform preprocessing.
Preprocessing time. This is critical for recommender systems. In production environments where the model must be retrained on new data, every retraining requires the dataset to be preprocessed again. A preprocessing time that is too long can rule the approach out for some applications, whereas a short one makes it possible to include preprocessing in end-to-end training and to test hypotheses about feature importance together with hyperparameter tuning of the network.
We did not enforce 1:1 parity between the datasets, as convergence accuracy proves the validity of the features.
| Metric | CPU preprocessing: Spark on NVIDIA DGX-1 | CPU preprocessing: Spark on NVIDIA DGX A100 | GPU preprocessing: NVTabular on DGX-1 8-GPU | GPU preprocessing: NVTabular on DGX A100 8-GPU |
| Lines of code | ~1,500 | ~1,500 | ~500 | ~500 |
| Top RAM consumption [GB] | 167.0 | 223.4 | 48.7 | 50.6 |
| Top VRAM consumption per GPU [GB] | 0 | 0 | 13 | 67 |
| Preprocessing time [min] | 45.6 | 38.5 | 3.9 | 2.3 |
Table 1. A comparison of CPU preprocessing (Spark) and GPU preprocessing (NVTabular).
Convergence accuracy
On the chosen metric for the Outbrain dataset, Mean Average Precision at 12 (MAP@12), the features produced by both Spark-CPU and NVTabular achieve similar convergence accuracy: MAP@12 > 0.655.
Hardware requirements
You can run both the NVTabular and Spark-CPU versions on DGX-1 V100 and DGX A100 systems. Spark-CPU consumes around 170 GB of RAM, while the RAM footprint of NVTabular is about 3x smaller. NVTabular can run successfully even on a single-GPU machine and still be an order of magnitude faster than Spark-CPU, without the need for memory-optimized machines.
End-to-end preprocessing time
Comparing Spark-CPU with NVTabular GPU preprocessing, end-to-end preprocessing time is 17x faster on the DGX A100 and 12x faster on the DGX-1.
Code brevity and legibility
The Spark code to generate the features spans over approximately 1,500 lines, while the NVTabular code is about 500 lines. The brevity in the NVTabular workflow also lends itself to legibility, as fewer lines of code and descriptive function signatures make it obvious what a given line is trying to accomplish.
The following list contains samples with side-by-side comparisons of Spark and NVTabular, showing the increase in code brevity and legibility in favor of NVTabular. The operation used in both is taking the TF-IDF cosine similarity between an ad and its landing page.
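As a generic illustration of that operation (not the repository's Spark or NVTabular code), TF-IDF cosine similarity between an ad text and its landing-page text can be computed along these lines; the example strings are made up:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ad_text = "holiday deals on flights to lisbon"
landing_page_text = "book cheap flights and holiday packages to lisbon and porto"

# Fit a shared TF-IDF vocabulary over both documents, then compare the resulting vectors
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([ad_text, landing_page_text])
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"TF-IDF cosine similarity: {similarity:.3f}")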
Training the model is analyzed based on the following criteria:
Reaching evaluation metrics.
Fast and stable training (forward and backward pass): consistently reaching the evaluation metric regardless of initialization, hardware architecture, or other training setup choices.
Fast scoring of the evaluation set: reaching a throughput that mimics the model’s behavior in production.
We used the Mean Average Precision at 12 (MAP@12) metric, the same as in the original Outbrain Kaggle competition. A direct comparison with the accuracies obtained there is unjustified because the original Kaggle competition had data leaks that could be exploited in post-processing of model results, leading to a higher MAP@12 score.
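For reference, here is a minimal sketch of how MAP@12 can be computed for this task, assuming exactly one clicked ad per display as in the Outbrain data (so average precision reduces to the reciprocal rank of the clicked ad within the top 12); it is for illustration only and is not the repository's evaluation code:
import numpy as np

def map_at_k(ranked_ad_lists, clicked_ads, k=12):
    """Mean average precision at k, assuming exactly one clicked ad per display."""
    scores = []
    for ranked, clicked in zip(ranked_ad_lists, clicked_ads):
        top_k = list(ranked[:k])
        # With a single relevant item, AP@k is the reciprocal of its rank (or 0 if absent)
        scores.append(1.0 / (top_k.index(clicked) + 1) if clicked in top_k else 0.0)
    return float(np.mean(scores))

# Toy example: two displays, clicked ads ranked 1st and 3rd -> (1 + 1/3) / 2
print(map_at_k([[10, 11, 12], [20, 21, 22]], [10, 22]))  # ~0.667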
As there are multiple setup options (two hardware architectures: A100 and V100; multiple floating-point precisions: FP32, TF32, and AMP; two versions of preprocessing: Spark-CPU and NVTabular; and the XLA optimizer), it is essential to make sure that convergence is achieved in each setup. We performed multiple stability tests showing that MAP@12 above the selected threshold is reached regardless of the training setup, and assessing training stability and the impact of AMP on accuracy.
Training performance results
As stated earlier, we wanted the model to train fast. You can measure this in two ways: by model throughput [samples/s] and by time to train [min]. When training on GPU compared to CPU, you can see speedups of up to 108x for the NVIDIA Ampere architecture with TF32 precision (Figure 4).
Single-GPU configurations see up to a 1.2x speedup when using AMP on the Ampere architecture; for Volta, the speedup is over 3x. Introducing multi-GPU training in a strong-scaling mode yields speedups of 1.2x–4.6x compared to single-GPU training. Comparing the Ampere and Volta architectures for TF32 and FP32 training, respectively, shows a speedup of 2.2x (single GPU) to 4.5x (eight GPUs). Ampere is also 1.4x–1.8x faster than Volta for AMP training. Bearing in mind that you don’t lose any accuracy with AMP, XLA, and multi-GPU training, this brings huge value to recommender system models.
Training time improves significantly when training on GPU compared to CPU: the best configuration is over 100x faster. The TFRecords dataset consumes around 40 GB of disk space. With the best training configuration (8x A100, TF32 precision, XLA on), this implementation of the Wide & Deep model completes a 20-epoch training run within eight minutes, that is, less than 25 s per epoch.
Evaluating performance results
Having a model that trains with such throughput is beneficial. In offline scenarios, another parameter is also important: how fast you can evaluate all pairs of users and items. If you have 10^6 distinct users and only 10^3 distinct items, that gives you 10^9 different user-item pairs. Fast evaluation of trained models is a key concept. Figure 6 shows the evaluation performance for A100 and V100 with varying batch size.
Recommendation serving usually reflects a scenario in which a single batch contains all items to score for a single user. With the presented native evaluation, you can expect over 1,000 users to be scored with a batch size of 4,096 (items) on eight A100 GPUs in TF32 precision.
End-to-end training
We define the end-to-end training time as the entire time to preprocess the data and train the model. It is important to account for both steps, because the feature engineering and the training with accuracy measurement are repeated. Shortening end-to-end training is equivalent to bringing the model to production faster or performing more experiments in the same time. With GPU preprocessing, you see a massive decrease in end-to-end training time (Figure 7).
For both DGX-1 and DGX A100, the speedup in end-to-end training is tremendous. Because this setup trained the model on GPU for both Spark and NVTabular, the speedup comes from the preprocessing step. It results in up to 3.8x faster end-to-end training for DGX-1 and up to 5.4x for DGX A100. Another important aspect of GPU preprocessing is the decreased fraction of total end-to-end training time spent in the preprocessing step: from ~75% with Spark down to ~25% with NVTabular.
Summary
In this post, we demonstrated an end-to-end preprocessing and training pipeline for the Wide & Deep model. We showed you how to get at least a 10x reduction in dataset preprocessing time using GPU preprocessing with NVTabular. Such an incredible speedup enables you to quickly verify your hypotheses about the data and bring new features to production.
We also showed the stability of training while reaching the evaluation score of MAP@12 for multiple training setups:
NVIDIA Ampere Architecture
NVIDIA Volta Architecture
Multi-GPU training
AMP training
XLA
Thanks to the great speedup that these features provide, you can train on the dataset in less than 25 s per epoch. Model throughput on GPU is over 100x higher than on CPU. Finally, we showed an evaluation throughput of 21 million samples per second from a model checkpoint.
Future work for the Wide & Deep TensorFlow 2 implementation will concentrate on inference in Triton Server, improving the data loader to support parquet input files, and upgrading preprocessing in NVTabular to a recently released API version.
We encourage you to check our implementation of the Wide & Deep model in the NVIDIA DeepLearningExamples GitHub repository. In the comments, please tell us how you plan to adopt and extend this project.