Categories
Misc

Upcoming Workshop: Training & Tuning Text-to-Speech with NVIDIA NeMo and W&B


Learn to train an end-to-end TTS system and track experiments in this live workshop on December 8. Set up the environment, review code blocks, test the model, and more.

Categories
Offsites

Talking to Robots in Real Time

A grand vision in robot learning, going back to the SHRDLU experiments in the late 1960s, is that of helpful robots that inhabit human spaces and follow a wide variety of natural language commands. Over the last few years, there have been significant advances in the application of machine learning (ML) for instruction following, both in simulation and in real-world systems. Recent PaLM-SayCan work has produced robots that leverage language models to plan long-horizon behaviors and reason about abstract goals. Code as Policies has shown that code-generating language models combined with pre-trained perception systems can produce language-conditioned policies for zero-shot robot manipulation. Despite this progress, an important missing property of current “language in, actions out” robot learning systems is real-time interaction with humans.

Ideally, robots of the future would react in real time to any relevant task a user could describe in natural language. Particularly in open human environments, it may be important for end users to customize robot behavior as it is happening, offering quick corrections (“stop, move your arm up a bit”) or specifying constraints (“nudge that slowly to the right”). Furthermore, real-time language could make it easier for people and robots to collaborate on complex, long-horizon tasks, with people iteratively and interactively guiding robot manipulation with occasional language feedback.

The challenges of open-vocabulary language following. To be successfully guided through a long horizon task like “put all the blocks in a vertical line”, a robot must respond precisely to a wide variety of commands, including small corrective behaviors like “nudge the red circle right a bit”.

However, getting robots to follow open-vocabulary language poses a significant challenge from an ML perspective. This is a setting with an inherently large number of tasks, including many small corrective behaviors. Existing multitask learning setups make use of curated imitation learning datasets or complex reinforcement learning (RL) reward functions to drive the learning of each task, and this significant per-task effort is difficult to scale beyond a small predefined set. Thus, a critical open question in the open-vocabulary setting is: how can we scale the collection of robot data to include not dozens, but hundreds of thousands of behaviors in an environment, and how can we connect all these behaviors to the natural language an end user might actually provide?

In Interactive Language, we present a large scale imitation learning framework for producing real-time, open vocabulary language-conditionable robots. After training with our approach, we find that an individual policy is capable of addressing over 87,000 unique instructions (an order of magnitude larger than prior works), with an estimated average success rate of 93.5%. We are also excited to announce the release of Language-Table, the largest available language-annotated robot dataset, which we hope will drive further research focused on real-time language-controllable robots.

Guiding robots with real time language.

Real Time Language-Controllable Robots

Key to our approach is a scalable recipe for creating large, diverse language-conditioned robot demonstration datasets. Unlike prior setups that define all the skills up front and then collect curated demonstrations for each skill, we continuously collect data across multiple robots without scene resets or any low-level skill segmentation. All data, including failure data (e.g., knocking blocks off a table), goes through a hindsight language relabeling process to be paired with text. Here, annotators watch long robot videos to identify as many behaviors as possible, marking when each began and ended, and use freeform natural language to describe each segment. Importantly, in contrast to prior instruction following setups, all skills used for training emerge bottom up from the data itself rather than being determined upfront by researchers.
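As a rough illustration of what hindsight relabeling produces, a minimal sketch of one annotated segment is shown below; the field names are hypothetical and do not reflect the released Language-Table schema:

```python
from dataclasses import dataclass

@dataclass
class RelabeledSegment:
    """One hindsight-labeled behavior carved out of a long robot video.

    Field names are illustrative only; they do not reflect the released
    Language-Table schema.
    """
    episode_id: str      # which long, unsegmented robot recording
    start_frame: int     # annotator-chosen start of the behavior
    end_frame: int       # annotator-chosen end of the behavior
    instruction: str     # freeform natural language written after the fact

# Even a failure (e.g., knocking blocks off the table) is usable data,
# because the annotator simply describes what actually happened.
segment = RelabeledSegment(
    episode_id="robot_03_2022_06_14",
    start_frame=1200,
    end_frame=1420,
    instruction="push the green star off the right edge of the table",
)
```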

Our learning approach and architecture are intentionally straightforward. Our robot policy is a cross-attention transformer, mapping 5 Hz video and text to 5 Hz robot actions, using a standard supervised behavioral cloning objective with no auxiliary losses. At test time, new spoken commands can be sent to the policy (via speech-to-text) at any time, at rates of up to 5 Hz.
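Below is a minimal sketch of this kind of language-conditioned behavioral cloning setup in PyTorch; the dimensions, feature encoders, and action space are placeholders rather than the exact Interactive Language architecture described in the paper:

```python
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    """Toy cross-attention policy: text tokens attend over video features.

    Dimensions and structure are illustrative, not the paper's exact model.
    """
    def __init__(self, d_model=256, n_actions=2):
        super().__init__()
        self.text_proj = nn.Linear(512, d_model)    # e.g., frozen text-encoder features
        self.image_proj = nn.Linear(1024, d_model)  # e.g., per-frame visual features
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.action_head = nn.Linear(d_model, n_actions)  # e.g., 2D end-effector delta

    def forward(self, text_feats, video_feats):
        q = self.text_proj(text_feats)              # (B, T_text, d_model)
        kv = self.image_proj(video_feats)           # (B, T_frames, d_model)
        attended, _ = self.cross_attn(q, kv, kv)
        return self.action_head(attended.mean(dim=1))  # (B, n_actions)

# Standard behavioral cloning: supervised regression onto demonstrated actions.
policy = LanguageConditionedPolicy()
text = torch.randn(8, 16, 512)      # batch of encoded commands
video = torch.randn(8, 10, 1024)    # batch of recent-frame features
target_actions = torch.randn(8, 2)
loss = nn.functional.mse_loss(policy(text, video), target_actions)
loss.backward()
```

At 5 Hz, a new spoken command can simply replace the text features between action steps, which is what enables the real-time corrections described above.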

Interactive Language: an imitation learning system for producing real time language-controllable robots.

Open Source Release: Language-Table Dataset and Benchmark

This annotation process allowed us to collect the Language-Table dataset, which contains over 440k real and 180k simulated demonstrations of the robot performing a language command, along with the sequence of actions the robot took during the demonstration. This is the largest language-conditioned robot demonstration dataset of its kind, by an order of magnitude. Language-Table comes with a simulated imitation learning benchmark that we use to perform model selection, which can be used to evaluate new instruction following architectures or approaches.

Dataset                          # Trajectories (k)    # Unique (k)
Episodic Demonstrations
BC-Z                             25                    0.1
SayCan                           68                    0.5
Playhouse                        1,097                 779
Hindsight Language Labeling
BLOCKS                           30                    n/a
LangLFP                          10                    n/a
LOREL                            6                     1.7
CALVIN                           20                    0.4
Language-Table (real + sim)      623 (442+181)         206 (127+79)

We compare Language-Table to existing robot datasets, highlighting proportions of simulated (red) or real (blue) robot data, the number of trajectories collected, and the number of unique language describable tasks.

Learned Real Time Language Behaviors

Examples of short horizon instructions the robot is capable of following, sampled randomly from the full set of over 87,000.

Short-Horizon Instruction                          Success
push the blue triangle to the top left corner      80.0%
separate the red star and red circle               100.0%
nudge the yellow heart a bit right                 80.0%
place the red star above the blue cube             90.0%
point your arm at the blue triangle                100.0%
push the group of blocks left a bit                100.0%
(87,000 more…)
Average over 87k, 95% CI                           93.5% ± 3.42%

95% Confidence interval (CI) on the average success of an individual Interactive Language policy over 87,000 unique natural language instructions.

We find that interesting new capabilities arise when robots are able to follow real time language. We show that users can walk robots through complex long-horizon sequences using only natural language to solve for goals that require multiple minutes of precise, coordinated control (e.g., “make a smiley face out of the blocks with green eyes” or “place all the blocks in a vertical line”). Because the robot is trained to follow open vocabulary language, we see it can react to a diverse set of verbal corrections (e.g., “nudge the red star slightly right”) that might otherwise be difficult to enumerate up front.

Examples of long horizon goals reached under real time human language guidance.

Finally, we see that real time language allows for new modes of robot data collection. For example, a single human operator can control four robots simultaneously using only spoken language. This has the potential to scale up the collection of robot data in the future without requiring undivided human attention for each robot.

One operator controlling multiple robots at once with spoken language.

Conclusion

While currently limited to a tabletop with a fixed set of objects, Interactive Language shows initial evidence that large scale imitation learning can indeed produce real time interactable robots that follow freeform end user commands. We open source Language-Table, the largest language conditioned real-world robot demonstration dataset of its kind and an associated simulated benchmark, to spur progress in real time language control of physical robots. We believe the utility of this dataset may not only be limited to robot control, but may provide an interesting starting point for studying language- and action-conditioned video prediction, robot video-conditioned language modeling, or a host of other interesting active questions in the broader ML context. See our paper and GitHub page to learn more.

Acknowledgements

We would like to thank everyone who supported this research. This includes robot teleoperators: Alex Luong, Armando Reyes, Elio Prado, Eric Tran, Gavin Gonzalez, Jodexty Therlonge, Joel Magpantay, Rochelle Dela Cruz, Samuel Wan, Sarah Nguyen, Scott Lehrer, Norine Rosales, Tran Pham, Kyle Gajadhar, Reece Mungal, and Nikauleene Andrews; robot hardware support and teleoperation coordination: Sean Snyder, Spencer Goodrich, Cameron Burns, Jorge Aldaco, Jonathan Vela; data operations and infrastructure: Muqthar Mohammad, Mitta Kumar, Arnab Bose, Wayne Gramlich; and the many who helped provide language labeling of the datasets. We would also like to thank Pierre Sermanet, Debidatta Dwibedi, Michael Ryoo, Brian Ichter and Vincent Vanhoucke for their invaluable advice and support.

Categories
Misc

Improving Machine Learning Security Skills at a DEF CON Competition


Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and data science domains. 

While the state of the art moves forward, there is no clear onboarding and learning path for securing and testing machine learning systems. How, then, should interested practitioners begin developing machine learning security skills? Reading related articles on arXiv helps, but where do you turn for practical, hands-on steps?

Competitions offer one promising opportunity. NVIDIA recently helped run an innovative ML security competition at the DEF CON 30 hacking and security conference. Hosted by AI Village, the competition drew more than 3,000 participants. It aimed to introduce attendees to each other and to the field of ML security. The competition proved to be a valuable opportunity for participants to develop and improve their machine learning security skills.

NVIDIA AI Red Team and AI Village

To proactively test and assess the security of NVIDIA machine learning offerings, the NVIDIA AI Red Team has been expanding. While this team consists of experienced security and data professionals, they recognized a need to develop ML security talent across the industry. With more exposure and education, data and security practitioners are likely to improve the security of their deployed machine learning systems.

AI Village is a community of data scientists and hackers working to educate on the topic of artificial intelligence (AI) in security and privacy. The community holds events at DEF CON each year. 

The NVIDIA AI Red Team and AI Village joined together at DEF CON 30 to engage the information security community with a machine learning security competition. The topic was potentially new to many attendees. Members of the AI Village created challenges designed to teach and test elements of ML security knowledge. In addition to NVIDIA, these members represented AWS Security, Orang Labs, and NetSec Explained.

AI Village Capture the Flag Competition

Capture the Flag (CTF) competitions include multiple challenges. Competitors play through the challenges and collect flags for each one they complete successfully. Flags are assigned point values based on difficulty, and competitors win by collecting the most points.

With this familiar format in mind, the AI Village and NVIDIA AI Red Team built The AI Village CTF @ DEFCON. Organizers partnered with Kaggle to use a platform familiar to the machine learning community. Similar to information security CTFs, Kaggle competitions provide a format for ML researchers to compete on discrete problems. 

Partnering with Kaggle provided the competition with a flexible and scalable platform that paired compute and data hosting with documentation and scoring. Although the challenge servers are no longer active, you can view the challenge descriptions.

Competitors reported onboarding and moving through the challenges with ease, with minimal additional infrastructure required from the AI Village. Furthermore, Kaggle has a large audience of skilled data scientists and machine learning engineers who were excited to explore the security domain. Kaggle also generously offered ongoing support and $25,000 in prizes. We could not have asked for a better partner for this event.

Over the month-long competition, over 3,000 competitors hacked their way through 22 challenges. This far exceeded expectations and included participants from over 70 countries, from first-time Kagglers to Grandmasters. The event succeeded in bringing the traditional information security and machine learning communities together to tackle a range of challenges from this new domain of ML security. 

Competitors used publicly available tools and applied techniques innovatively, including open source research, masking, and dimensionality reduction. In the process, they often reimplemented attacks from the academic literature.

Because one challenge remained unsolved, there was always a chance for someone to rise to the top of the leaderboard. For the final two weeks of the competition, the Kaggle Discussion Board and AI Village Discord were abuzz with theories and explorations of the remaining unsolved challenge. The organizers were checking hourly for a buzzer-beating leaderboard shift. Check out the challenge solutions.

Figure 1. Player scores over the month-long AI Village Capture the Flag Competition 

Inference Challenge

In the Inference Challenge, participants had to execute a membership inference attack to identify training samples, with only API access to an image classifier. Competitors who succeeded recovered images showing the characters of the flag.

Some competitors chose to randomly generate images by permuting pixel values, effectively brute-forcing the problem. Others assumed the training data included a standard dataset and used EMNIST images as their source, leveraging open data. Others made use of the Adversarial Robustness Toolbox, producing output similar to what is shown in Figure 2.
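A hedged sketch of that query-and-rank strategy is shown below; the scoring endpoint, response format, and field names are hypothetical stand-ins for the actual challenge API:

```python
import numpy as np
import requests  # HTTP access to the challenge's image classifier

API_URL = "https://example-challenge-host/score"  # placeholder, not the real endpoint

def query_classifier(image: np.ndarray) -> float:
    """Send one candidate image to the (hypothetical) scoring API and
    return the model's confidence for it."""
    resp = requests.post(API_URL, json={"image": image.tolist()})
    return float(resp.json()["confidence"])

def rank_candidates(candidates: np.ndarray, top_k: int = 10):
    """Membership-inference-style search: images the model is unusually
    confident about are likely close to its training data."""
    scores = [query_classifier(img) for img in candidates]
    order = np.argsort(scores)[::-1]
    return order[:top_k], [scores[i] for i in order[:top_k]]

# Candidates could come from EMNIST characters, random pixel permutations,
# or optimization-based methods such as ART's MIFace.
```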

Figure 2. Example output from the Inference Challenge, which spells D3FC0N

Whatever the method used, a successful challenger was rewarded with the flag, spelling D3FC0N. This leetspeak encoding of the DEF CON conference name was used in several places on the conference website.

Crop2 Challenge

Research and out-of-the-box thinking often help to solve CTF challenges. For instance, the one unsolved challenge in the competition was Crop2. In Crop2, participants were given a poisoned cropping model and had to create the poisoned sample (within some error bounds). They had one training data example to work with (Figure 3). 

Figure 3. A sample training image provided to competitors: a multicolored 3×3 grid of circles in squares

This is a difficult problem without an efficient, standard algorithmic solution. When you think about all of the pixels in an image and all of the possible pixel values across three color channels, the search space explodes to over 800 billion options. Instead, competitors could combine reverse engineering, open source research, and assumptions to reduce the number of combinations.

After the competition ended, organizers gave hints to help competitors solve the Crop2 Challenge. One key hint involved using open source research to determine that the pixel colors were likely generated by matplotlib's default colormaps, which reduced the search space to the hundreds of thousands.
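The matplotlib hint is easy to check in a couple of lines; the snippet below simply lists the library's default color cycle and is illustrative rather than the organizers' intended solution:

```python
import matplotlib.pyplot as plt

# matplotlib's default property cycle ("tab10") supplies the small palette
# of hex colors the poisoned sample was likely drawn from.
default_colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
print(len(default_colors), default_colors)
# Searching over ~10 named colors per cell is vastly cheaper than searching
# over all 256**3 (~16.7 million) raw RGB values per pixel.
```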

By making these informed assumptions, one competitor was eventually able to reach the Crop2 Challenge solution. One trait of great hackers is tenacity: still working tirelessly after the competition ended, this competitor diligently worked through the provided hints. The competitor reported that a hint “helped me realize that we only needed to use nine colors. Mate, I’d been fiddling around with 16 million. This made the search space manageable.” 

Competitor notebooks

Check out some of our favorite notebooks from competitors:

  1. Chris Deotte – From a member of the Kaggle Grandmasters of NVIDIA (KGMoN), these solutions are very well organized and documented. We recommend Secret Sloth in particular.
  2. Eric Bouteillon – Watch the flag appear character-by-character in Excuse Me. Also notice the different solve techniques for the MATH challenges. Have you heard of silhouette score?
  3. John MacGillivray – John deduced that the Hotterdog model was based on MobileNet, enabling an offline attack. Great tradecraft.
  4. Fournierp – A comprehensive writeup about model inversion for the Inference Challenge, written from scratch. You can also check out the MIFace in the Adversarial Robustness Toolbox.
  5. Eoin O – Learn how you could have solved the Crop2 Challenge. More than 3,000 competitors tried to solve it for the greater part of a month. The day after the competition ended, organizers released several hints. Within a few hours, it was solved. It was great to see all of the competitors collaborating in Discord and the Kaggle Discussion Board after the competition ended.

Summary

The AI Village CTF @ DEF CON 30 competition showed that there is a significant appetite in both the security and data professions to improve machine learning security skills. As ML systems are deployed in increasingly security-critical contexts, it will become imperative to train professionals and to develop tools and methods for secure development, deployment, and testing.

NVIDIA will continue driving innovation with a robust and secure ecosystem for AI, from embedded devices and laptops to supercomputers and the cloud. As part of this effort, our AI Red Team will empower ML security research and testing internally and help establish security practices across the industry. We will host competitions and workshops, and release research and security tools in the future. If you’re interested in participating, contact us at threatops@nvidia.com.

Additional resources

Categories
Misc

Explainer: What Is Denoising?


Denoising is an advanced technique used to decrease grainy spots and discoloration in images while minimizing the loss of quality.

Categories
Misc

Designing an Optimal AI Inference Pipeline for Autonomous Driving


Self-driving cars must be able to detect objects quickly and accurately to ensure the safety of their drivers and other drivers on the road. Due to this need for real-time processing in autonomous driving (AD) and visual inspection use cases, multiple AI models with preprocessing and postprocessing logic are combined in a pipeline and used for machine learning (ML) inference.

Speedup is required in every step of the pipeline to ensure a low latency workflow. Latency is the time it takes to get the inference response. Faster processing of AD data will enable more efficient analysis and use of the information, creating a safer driving environment. A delay with any single aspect can slow down the entire pipeline. 

To achieve a low latency inference workflow, electric vehicle manufacturer NIO integrated NVIDIA Triton Inference Server into their AD inference pipeline. NVIDIA Triton Inference Server is an open source multiframework inference serving software.

This post explains how NIO orchestrated its pipeline of image preprocessing and postprocessing and AI models with NVIDIA Triton on GPUs. It also shows how NIO reduced network transmission to successfully speed up their AI inference workflow for AD use cases.

Join the NVIDIA Triton and NVIDIA TensorRT community and stay current on the latest product updates, bug fixes, content, best practices, and more.

Faster AI inference for real-time response

NIO designs, develops, jointly manufactures, and sells premium smart electric vehicles, driving innovations in next-generation technologies in autonomous driving, digital technologies, electric powertrains, and batteries. NIO Autonomous Driving Development Platform (NADP) is an R&D platform dedicated to the core autonomous driving service of NIO.

NIO chose NVIDIA Triton Inference Server because of several key technical and operational reasons, including:

  • NVIDIA Triton supports DAG-based orchestration of numerous models, along with preprocessing or postprocessing modules
  • Cloud-native deployment of NVIDIA Triton enabled multi-GPU, multi-node scaling in a lightweight way
  • High-quality documentation and learning resources helped ease migration to NVIDIA Triton
  • NVIDIA Triton’s stability and robust functionality are necessary for AD use cases 

NIO’s AI inference workflow for autonomous driving

Hundreds of AI models are used to mine data from autonomous vehicles. In a use case like autonomous driving, the inference workflow consists of multiple AI models with preprocessing and postprocessing logic stitched together in a pipeline. 

NIO moved the preprocessing and postprocessing of the pipeline from the client side, which runs on CPUs, to NVIDIA Triton running on GPUs. NVIDIA Triton's business logic scripting (BLS) functionality was used to orchestrate the pipeline to run optimally for the AD use case.

By moving the preprocessing from CPU to GPU and leveraging efficient pipeline orchestration, NIO achieved a 6x latency reduction in some core pipelines, improving overall throughput by up to 5x.

Before and after workflow pipelines are shown in Figure 1.

Figure 1. Comparison of NIO’s AI inference workflows before the introduction of NVIDIA Triton Inference Server (left) and after (right)

Model pipeline orchestration benefits of NVIDIA Triton

This section examines each of the benefits NIO realized by integrating NVIDIA Triton.

GPU-accelerated preprocessing

Preprocessing tasks such as decoding, resizing, and transposing were accelerated on the GPU by NVIDIA Triton using nvJPEG and NVIDIA DALI. This significantly offloaded the computing workload from the client CPU and reduced preprocessing latency.
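A hedged sketch of what GPU-side decode, resize, and layout conversion can look like with DALI is shown below; the file path, batch size, and output resolution are placeholders rather than NIO's actual configuration:

```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=8, num_threads=4, device_id=0)
def preprocess_pipeline():
    # Read encoded JPEGs (placeholder path), decode on the GPU with nvJPEG,
    # resize, and convert HWC -> CHW for the downstream model.
    jpegs, _ = fn.readers.file(file_root="/data/frames")   # placeholder dataset
    images = fn.decoders.image(jpegs, device="mixed")       # nvJPEG-backed decode
    images = fn.resize(images, resize_x=1280, resize_y=720)
    images = fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.0, 0.0, 0.0],
        std=[255.0, 255.0, 255.0],
    )
    return images

pipe = preprocess_pipeline()
pipe.build()
(batch,) = pipe.run()   # the batch stays on the GPU, ready for inference
```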

Upgrading models without the need for client application modification

By moving the preprocessing and postprocessing of the model to NVIDIA Triton, each time the model is upgraded, the client side does not require any modification. This essentially speeds up the rollout of the model, helping it reach production faster.

Using a single GPU node to reduce network data transfer overhead

A unified preprocessing enables multiple copies of the input to be shared with multiple backend recognition models. The process uses GPU shared memory on the server side, without data transfer overhead costs. 

Figure 2 shows that the pipeline can connect up to nine models using NVIDIA Triton's business logic scripting functionality.

Figure 2. Model pipeline orchestration with the NVIDIA Triton business logic scripting
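As a minimal sketch of this orchestration pattern, the Triton Python backend's BLS API lets a pipeline model call downstream models directly on the server; the model and tensor names below are placeholders, and the real pipeline fans out to nine models rather than one:

```python
# model.py for a Triton Python-backend "pipeline" model using
# business logic scripting (BLS). Names are illustrative placeholders.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Preprocessed image produced earlier in the pipeline; with BLS the
            # tensor can stay in GPU/shared memory instead of crossing the network.
            image = pb_utils.get_input_tensor_by_name(request, "PREPROCESSED_IMAGE")

            # Call one downstream recognition model; the real pipeline would
            # fan the same input out to multiple models.
            infer_request = pb_utils.InferenceRequest(
                model_name="recognition_model_1",           # placeholder name
                requested_output_names=["DETECTIONS"],
                inputs=[image],
            )
            infer_response = infer_request.exec()
            detections = pb_utils.get_output_tensor_by_name(infer_response, "DETECTIONS")

            responses.append(pb_utils.InferenceResponse(output_tensors=[detections]))
        return responses
```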

For an input image at 2K resolution, each frame is 1920 x 1080 x 3 x 8 = 47 Mb. At a full frame rate of 60 fps, the input data rate is 1920 x 1080 x 3 x 8 x 60 = 2,847 Mb per second. In the previous workflow, each image was sent sequentially to the nine models over the network, so the data transferred per second was 1920 x 1080 x 3 x 8 x 60 x 9 = 25 Gb = 3 GB.

In the new workflow, the nine models are orchestrated with NVIDIA Triton business logic scripting. The models access the image in GPU shared memory, so the images do not have to be sent over the network. Assuming a PCIe bandwidth of 160 Gb (20 GB) per second, keeping each second of generated data in GPU shared memory theoretically saves about 150 ms of PCIe transfer time.

Assuming an available network bandwidth of 16 Gb (2 GB) per second, it theoretically saves about 1,500 ms of network transfer time per second of data. Both savings speed up the workflow.
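The arithmetic behind these estimates can be checked directly; the short snippet below just reproduces the rounded figures used in the text:

```python
# Back-of-the-envelope check of the transfer-time estimates above,
# using the same rounded figures as the text.
bits_per_frame = 1920 * 1080 * 3 * 8              # ~47 Mb per 2K frame
bits_per_second = bits_per_frame * 60             # ~2,847 Mb at 60 fps
bits_all_models = bits_per_second * 9             # ~25 Gb if sent to all nine models
gb = 2**30                                        # the text uses binary gigabits

pcie_bps = 160 * gb                               # ~160 Gb/s (about 20 GB/s)
network_bps = 16 * gb                             # ~16 Gb/s (about 2 GB/s)

print(f"PCIe:    {bits_all_models / pcie_bps * 1e3:.0f} ms per second of data")     # ~150 ms
print(f"Network: {bits_all_models / network_bps * 1e3:.0f} ms per second of data")  # ~1,500 ms
# Keeping frames in GPU shared memory avoids this transfer entirely.
```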

Network transfer savings using image compression 

For accurate model prediction, the input image in the previous workflow had to be transmitted over the network at full resolution (1920 x 1080 x 3, with 8 bits per channel). After introducing server-side preprocessing, the original image can be compressed to a three-channel 720p image (1280 x 720 x 3) within the allowed range of accuracy loss.

As a result, transmitting the compressed image takes only a few hundred KB, and it is resized back to 1920 x 1080 x 3 on the server with minimal accuracy loss. This leads to additional network transfer savings, further speeding up the workflow.

Ease of integration in NADP inference platform

NIO’s current inference platform based on NVIDIA Triton is a key component of their Autonomous Driving Development Platform (NADP), used in their autonomous driving solution.

As the NIO platform is built on Kubernetes (K8s), it was imperative for NVIDIA Triton to integrate well with Kubernetes. The components of the workflow are implemented as Kubernetes resources (native and custom) around NVIDIA Triton.

Figure 3. NIO’s machine learning workflow in Kubernetes

Continuous Integration/Continuous Delivery (CI/CD)

Argo is the engine used to orchestrate the workflow in Kubernetes. It helps with CI/CD for all the components involved in development, quantization, access, cloud deployment, stress testing, and launch. NVIDIA Triton helps with CI/CD by triggering the next step in the workflow whenever models are loaded.

In addition, use of the NVIDIA Triton Docker container helps with consistent functionality across development, test, and deployment environments.

Integrating the Jupyter environment into the NVIDIA Triton image was seamless. Jupyter provides a convenient environment for complex problems that require online debugging or offline reproduction.

Ease of deployment with Istio

NVIDIA Triton natively supports gRPC protocol for communication with applications. However, as the Kubernetes native service cannot offer effective request-level load balancing for gRPC, NVIDIA Triton is integrated with the Istio service mesh. Istio is used to load balance traffic to NVIDIA Triton Inference Server and monitor the health of the service through liveness/readiness probes of NVIDIA Triton.

Ease of use with Apollo configuration management

Apollo Configuration Center is used for model name-based service discovery. Users can access the models without knowing the specific domain name where the model is deployed. Combined with the NVIDIA Triton model repository, users can directly trigger the deployment of models.

Metrics with Prometheus and Grafana

NVIDIA Triton provides a complete set of model service metrics based on model dimensions. For example, NVIDIA Triton can distinguish between inference request queueing time and GPU computation time, enabling fine-grained diagnosis and analysis of online model service performance without entering the debug mode. 

Because NVIDIA Triton supports the mainstream cloud-native Prometheus/Grafana stack, users can easily configure dashboards and alerts for each dimension, providing the metrics support needed for high service availability.
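As a small illustration, Triton's Prometheus-format metrics can be scraped with plain HTTP; the host below is a placeholder, and the two counters shown are the queue-time and compute-time metrics Triton documents (names may vary across releases):

```python
import requests  # Triton exposes Prometheus-format metrics over plain HTTP

# By default Triton serves metrics on port 8002; host/port here are placeholders.
METRICS_URL = "http://triton.example.internal:8002/metrics"

def get_metric_lines(substring: str):
    """Return the Prometheus metric lines whose names contain `substring`."""
    body = requests.get(METRICS_URL, timeout=5).text
    return [line for line in body.splitlines()
            if substring in line and not line.startswith("#")]

# Queue time vs. GPU compute time per model, as exposed by Triton's
# nv_inference_* counters.
for line in get_metric_lines("nv_inference_queue_duration_us"):
    print(line)
for line in get_metric_lines("nv_inference_compute_infer_duration_us"):
    print(line)
```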

Key takeaways

NIO’s optimized workflow that integrates NVIDIA Triton Inference Server resulted in a 6x latency reduction in some core pipelines. This improved overall throughput by up to 5x.

By moving the preprocessing logic to GPU using the NVIDIA Triton pipeline orchestration functionality, NIO achieved:

  • Faster image processing
  • Freed CPU capacity
  • Reduced network transfer overhead
  • Higher inference throughput

NIO achieved AI inference workflow acceleration using NVIDIA Triton Inference Server. NVIDIA Triton was also easy to integrate in a robust Kubernetes-based scalable solution.

Additional resources

Categories
Offsites

Making a Traversable Wormhole with a Quantum Computer

Wormholes — wrinkles in the fabric of spacetime that connect two disparate locations — may seem like the stuff of science fiction. But whether or not they exist in reality, studying these hypothetical objects could be the key to making concrete the tantalizing link between information and matter that has bedeviled physicists for decades.

Surprisingly, a quantum computer is an ideal platform to investigate this connection. The trick is to use a correspondence called AdS/CFT, which establishes an equivalence between a theory that describes gravity and spacetime (and wormholes) in a fictional world with a special geometry (AdS) and a quantum theory that does not contain gravity at all (CFT).

In “Traversable wormhole dynamics on a quantum processor”, published in Nature today, we report on a collaboration with researchers at Caltech, Harvard, MIT, and Fermilab to simulate the CFT on the Google Sycamore processor. By studying this quantum theory on the processor, we are able to leverage the AdS/CFT correspondence to probe the dynamics of a quantum system equivalent to a wormhole in a model of gravity. The Google Sycamore processor is among the first to have the fidelity needed to carry out this experiment.

Background: It from Qubit

The AdS/CFT correspondence was discovered at the end of a series of inquiries arising from the question: What’s the maximum amount of information that can fit in a single region of space? If one asked an engineer how much information could possibly be stored in a data center, the answer would likely be that it depends on the number and type of memory chips inside it. But surprisingly, what is inside the data center is ultimately irrelevant. If one were to cram more and more memory chips with denser and denser electronics into the data center, it would eventually collapse into a black hole and disappear behind an event horizon.

When physicists such as Jacob Bekenstein and Stephen Hawking tried to compute the information content of a black hole, they found to their surprise that it is given by the area of the event horizon — not by the volume of the black hole. It looks as if the information inside the black hole was written on the event horizon. Specifically, a black hole with an event horizon that can be tiled with A tiny units of area (each unit, called a “Planck area,” is 2.6121×10⁻⁷⁰ m²) has at most A/4 bits of information. This limit is known as the Bekenstein-Hawking bound.
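To get a feel for the scale of this bound, a quick back-of-the-envelope calculation using the Planck area quoted above gives the maximum information content of a horizon of everyday size:

```python
# Rough illustration of the Bekenstein-Hawking bound: at most A/4 bits,
# with A measured in Planck areas (value quoted above).
planck_area_m2 = 2.6121e-70       # square meters per Planck area

def max_bits(horizon_area_m2: float) -> float:
    tiles = horizon_area_m2 / planck_area_m2   # A, in Planck-area units
    return tiles / 4.0

# A one-square-meter patch of event horizon could encode roughly 1e69 bits.
print(f"{max_bits(1.0):.2e} bits")
```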

This discovery that the maximum amount of information that could fit in a region was proportional not to its volume, but to the surface area of the region’s boundary hinted at an intriguing relationship between quantum information and the three-dimensional spatial world of our everyday experience. This relationship has been epitomized by the phrase “It from qubit,” describing how matter (“it”) emerges from quantum information (“qubit”).

While formalizing such a relationship is difficult for ordinary spacetime, recent research has led to remarkable progress with a hypothetical universe with hyperbolic geometry known as “anti-de Sitter space” in which the theory of quantum gravity is more naturally constructed. In anti-de Sitter space, the description of a volume of space with gravity acting in it can be thought of as encoded on the boundary enclosing the volume: every object inside the space has a corresponding description on the boundary and vice versa. This correspondence of information is called the holographic principle, which is a general principle inspired by Bekenstein and Hawking’s observations.

Schematic representation of anti-de Sitter space (interior of cylinder) and its dual representation as quantum information on the boundary (surface of cylinder).

The AdS/CFT correspondence allows physicists to connect objects in space with specific ensembles of interacting qubits on the surface. That is, each region of the boundary encodes (in quantum information) the content of a region in spacetime such that matter at any given location can be “constructed” from the quantum information. This allows quantum processors to work directly with qubits while providing insights into spacetime physics. By carefully defining the parameters of the quantum computer to emulate a given model, we can look at black holes, or even go further and look at two black holes connected to each other — a configuration known as a wormhole, or an Einstein-Rosen bridge.

Experiment: Quantum Gravity in the Lab

Implementing these ideas on a Sycamore processor, we have constructed a quantum system that is dual to a traversable wormhole. Translated from the language of quantum information to spacetime physics via the holographic principle, the experiment let a particle fall into one side of a wormhole and observed it emerging on the other side.

Traversable wormholes were recently shown to be possible by Daniel Jafferis, Ping Gao and Aron Wall. While wormholes have long been a staple of science fiction, there are many possible spacetime geometries in which the formation of a wormhole is possible, but a naïvely constructed one would collapse on a particle traveling through it. The authors showed that a shockwave — i.e., a deformation of spacetime that propagates at the speed of light — of negative energy would solve this problem, propping open the wormhole long enough to enable traversability. The presence of negative energy in a traversable wormhole is similar to negative energy in the Casimir effect, where vacuum energy pushes together closely spaced plates. In both cases, quantum mechanics permits the energy density at a given location in space to be either positive or negative. On the other hand, if the wormhole experienced a shockwave of positive energy, no information would be allowed to pass through.

The simplest application of the holographic principle to create a wormhole requires many, many qubits — in fact, to approach the pencil-and-paper solutions given by theoretical physicists, one would need an arbitrarily large number of qubits. As the number of qubits is reduced, additional corrections are required that are still poorly understood today. New ideas were needed to build a traversable wormhole on a quantum computer with a limited number of qubits.

One of us (Zlokapa) adopted ideas from deep learning to design a small quantum system that preserved key aspects of gravitational physics. Neural networks are trained via backpropagation, a method that optimizes parameters by directly computing the gradient through the layers of the network. To improve the performance of a neural network and prevent it from overfitting to the training dataset, machine learning (ML) practitioners employ a host of techniques. One of these, sparsification, attempts to restrict the detail of information in the network by setting as many weights as possible to zero.

Similarly, to create the wormhole, we started with a large quantum system and treated it like a neural network. Backpropagation updated the parameters of the system in order to maintain gravitational properties while sparsification reduced the size of the system. We applied ML to learn a system that preserved only one key gravitational signature: the importance of using a negative energy shockwave. The training dataset compared dynamics of a particle traversing a wormhole propped open with negative energy and collapsed with positive energy. By ensuring the learned system preserved this asymmetry, we obtained a sparse model consistent with wormhole dynamics.

Learning procedure to produce a sparse quantum system that captures gravitational dynamics. A single coupling consists of all six possible connections between a given group of four fermions.

Working with Jafferis and a handful of collaborators from Caltech, Fermilab, and Harvard, we subjected the new quantum system to numerous tests to determine if it showed gravitational behavior beyond signatures induced by different energy shockwaves. For example, while quantum mechanical effects can transmit information across a quantum system in a diverse set of ways, information that travels in spacetime — including through a wormhole — must be causally consistent. This and other signatures were verified on classical computers, confirming that the dynamics of the quantum system were consistent with a gravitational interpretation as viewed through the dictionary of the holographic principle.

Implementing the traversable wormhole as an experiment on a quantum processor is an extraordinarily delicate process. The microscopic mechanism of information transfer across qubits is highly chaotic: imagine an ink drop swirling in water. As a particle falls into a wormhole, its information gets smeared over the entire quantum system in the holographic picture. For the negative energy shockwave to work, the scrambling of information must follow a particular pattern known as perfect size winding. After the particle hits the negative energy shockwave, the chaotic patterns effectively proceed in reverse: when the particle emerges from the wormhole, it is as if the ink drop has come back together by exactly undoing its original turbulent spread. If, at any point in time, a small error occurs, the chaotic dynamics will not undo themselves, and the particle will not make it through the wormhole.

Left: Quantum circuit describing a traversable wormhole. A maximally entangled pair of qubits (“EPR pair”) are used as an entanglement probe to send a qubit through the wormhole. The qubit is swapped into the left side of the wormhole at time –t0; the energy shockwave is applied at time 0; and the right side of the wormhole is measured at time t1. Right: Photograph of the Google Sycamore quantum processor.

On the Sycamore quantum processor, we measured how much quantum information passed from one side of the system to the other when applying a negative versus a positive energy shockwave. We observed a slight asymmetry between the two energies, showing the key signature of a traversable wormhole. Due to the protocol’s sensitivity to noise, the Sycamore processor’s low error rates were critical to measuring the signal; with even 1.5x the amount of noise, the signal would have been entirely obscured.

Looking Forward

As quantum devices continue to improve, lower error rates and larger chips will allow deeper probes of gravitational phenomena. Unlike experiments such as LIGO that record data about gravity in the world around us, quantum computers provide a tool to explore theories of quantum gravity. We hope that quantum computers will help develop our understanding of future theories of quantum gravity beyond current models.

Gravity is only one example of the unique ability of quantum computers to probe complex physical theories: quantum processors can provide insight into time crystals, quantum chaos, and chemistry. Our work demonstrating wormhole dynamics represents a step towards discovering fundamental physics using quantum processors at Google Quantum AI.

You can also read more about this result here.

Acknowledgements

We would like to thank our Quantum Science Communicator Katherine McCormick for her help writing this blog post.

Categories
Misc

An IT Manager’s Guide to Deploying an Edge AI Solution


Timing is everything, especially when it impacts your customer experiences, bottom line, and production efficiency. Edge AI can help by delivering real-time intelligence and increased privacy in intermittent, low bandwidth, and low cost environments. 

By 2025, according to Gartner®, 75% of data will be created and processed at the edge, outside the traditional data center or cloud.¹ It’s no wonder that thousands of companies are turning to edge AI to drive transformation for their businesses.

As organizations undergo this shift, many IT and business leaders are still in the early stages of planning and executing their edge computing strategies. Because edge AI is a new concept, the process is difficult for many. 

NVIDIA, a leading AI infrastructure company with robust experience helping organizations, customers, and partners successfully deploy edge AI solutions, is no stranger to these new concepts. 

In an effort to help others, the learnings and recommendations from these experiences are presented in An IT Manager’s Guide: How to Successfully Deploy an Edge AI Solution. The whitepaper offers an in-depth look at building and executing a successful edge AI deployment.

This post features recommendations on some key considerations when configuring an edge system. 

Edge system configurations: Design recommendations 

There are many parameters to consider when sizing a system. The optimal PCIe server configuration will depend on the target workload for that server. 

Edge AI models incorporate various workloads into their applications, such as vision AI, natural language processing, recommendations based on industrial sensors, and predictive analytics. 

Table 1. General system configuration recommendations for a vision AI workload. Actual recommendations will vary based on workload and use case.

Edge computing sizing considerations

When it comes to designing full hardware and software solutions at the edge, it is important to look at the solution as a whole to understand how the parts work together. Below are some of the individual considerations that IT must evaluate for edge AI deployments.

Number of streams: Each camera feed is a stream requiring a certain amount of memory and compute for processing. Small configurations of 6-7 video processing streams require relatively small systems. Larger deployments may require high performance systems that are typically seen in the data center. 

Application examples: One of the first steps to a successful edge AI deployment is understanding what workload needs to be run to reach your goals. Vision AI applications like image recognition, people or vehicle detection, and segmentation are all common use cases. 

Once an application is determined, it is important to understand the intended scale. For example, are additional AI models needed? Typically, a proof of concept (POC) will consist of a single AI model and use case, but most production deployments ultimately incorporate multiple AI models. The next steps include quantifying the business value of the application, dictating any environmental constraints, and securing stakeholder alignment. 

Memory: Perhaps the most common way to under-resource an edge AI solution is to configure the edge systems with too little memory. Edge AI systems require significantly more memory than other applications to support the parallel execution of the inference engine across the CPU and GPUs. 

The data science team or application vendor who trained the AI will know the memory requirements of the latest model. IT teams should, at a minimum, double that number to accommodate the inevitable expansion of the model as it retrains. This will also provide some headroom for the additional AI models that will need to be deployed alongside the first one.

Another rule of thumb is to provision twice as much system memory as the total GPU memory, and never less than 1.5x the total GPU memory. The memory should be evenly spread across all CPU sockets and memory channels for optimal performance.
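Expressed as a quick sizing check, those rules of thumb might look like the following sketch; the numbers plugged in are arbitrary examples rather than a recommendation for any particular system:

```python
def recommended_system_memory_gb(model_memory_gb: float,
                                 num_gpus: int,
                                 gpu_memory_gb: float) -> float:
    """Apply the rules of thumb above: at least 2x the model's current
    footprint, and 1.5x-2x the total GPU memory in the system."""
    model_headroom = 2.0 * model_memory_gb          # room for the model to grow on retraining
    total_gpu_memory = num_gpus * gpu_memory_gb
    gpu_floor = 1.5 * total_gpu_memory              # never less than 1.5x total GPU memory
    gpu_target = 2.0 * total_gpu_memory             # preferred 2x total GPU memory
    return max(model_headroom, gpu_floor, gpu_target)

# Example only: a 2-GPU system with 24 GB per GPU and a 10 GB model works
# out to roughly 96 GB of system memory, spread evenly across CPU sockets
# and memory channels.
print(recommended_system_memory_gb(model_memory_gb=10, num_gpus=2, gpu_memory_gb=24))
```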

Networking: As operations increasingly rely on digital technologies such as edge computing, resilience is key. There are two networks to consider when designing an edge solution: the network between the edge AI location and cloud, and the network between a sensor and the edge AI system. 

Understanding the type of network connectivity of your environment will help in determining the specific networking bandwidth requirements for your use case. For example, for a use case like robotics, where wired connectivity may not be possible, 5G is the next best choice, as it offers minimal congestion and guaranteed service and bandwidth.

Accelerators: Most edge applications run adequately on single socket x86 or Arm CPUs. But when the edge applications incorporate AI capabilities, they are far more compute intensive. 

To run an inference engine at the edge, the edge hardware needs enough compute power to execute complex neural networks with massively parallel computations. CPUs execute the independent operations of a neural network sequentially, while discrete accelerators can execute them in parallel. Accelerators are therefore architecturally suited for AI, providing meaningfully better performance, and they have become an essential component of modern AI infrastructure.

Among the most effective discrete accelerators for edge AI are Graphics Processing Units (GPUs) and Data Processing Units (DPUs). 

Storage: Naturally, the edge server requires local storage, usually a solid-state drive, for its operating system, network components, hardware drivers, and application software. Unlike other applications, edge AI solutions typically process a massive amount of unstructured input data such as images, voice, and sensor readings. Depending on how much of this data needs to be stored, for how long, how securely, and with what level of reliability, different storage options are called for.

The first step in determining what storage is necessary for an edge AI solution requires IT teams to think through a data strategy. The data strategy will dictate what and how much data will need to be stored locally or in the cloud, and in turn, guide what storage options are best for that particular solution. Without a proactive strategy, developers often make inconsistent and sub-optimal choices that create problems down the road. 

Security: Security is paramount for edge AI computing devices, as they are deployed in remote locations outside the data center firewalls and the physical protections that limit access to systems. For more details, see Edge Computing: Considerations for Security Architects.

When it comes to an edge AI solution, five areas should be understood and made part of the overall solutions architecture: end-to-end encryption, mutual authentication, physical security, zero trust networking, and real-time monitoring.

Management: A remote management plan is critical for edge environments because systems at the edge are distributed, always on, and often operate in remote settings. See Remotely Operating Systems and Applications at the Edge to learn more.

An edge management solution should provide automatic deployment and provisioning, ongoing management, real-time alerting, and auditing, and should use modern, cloud-native tools.

Organizations have the choice of whether to build or buy a management solution. The following are questions to consider: How quickly does a solution need to be set up? Is the appropriate team and expertise available? Does this provide secure management of my edge environment? 

Pillars of a successful edge deployment 

Deploying the infrastructure needed to support a scalable edge AI solution is a big challenge. The process is iterative and time consuming, yet critical to do correctly. Decisions that are made when building an edge AI solution have far-reaching implications that will impact an organization’s business outcomes. 

For more guidance on this topic, download An IT Manager’s Guide: How to Successfully Deploy an Edge AI Solution

References

1. Gartner, “Building an Edge Computing Strategy,” G00753920, September 2021. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Categories
Misc

New Workshop: Data Parallelism: How to Train Deep Learning Models on Multiple GPUs


Learn how to decrease model training time by distributing data to multiple GPUs, while retaining the accuracy of training on a single GPU.

Categories
Offsites

Better Language Models Without Massive Compute

In recent years, language models (LMs) have become more prominent in natural language processing (NLP) research and are also becoming increasingly impactful in practice. Scaling up LMs has been shown to improve performance across a range of NLP tasks. For instance, scaling up language models can improve perplexity across seven orders of magnitude of model sizes, and new abilities such as multi-step reasoning have been observed to arise as a result of model scale. However, one of the challenges of continued scaling is that training new, larger models requires great amounts of computational resources. Moreover, new models are often trained from scratch and do not leverage the weights from previously existing models.

In this blog post, we explore two complementary methods for improving existing language models by a large margin without using massive computational resources. First, in “Transcending Scaling Laws with 0.1% Extra Compute”, we introduce UL2R, which is a lightweight second stage of pre-training that uses a mixture-of-denoisers objective. UL2R improves performance across a range of tasks and even unlocks emergent performance on tasks that previously had close to random performance. Second, in “Scaling Instruction-Finetuned Language Models”, we explore fine-tuning a language model on a collection of datasets phrased as instructions, a process we call “Flan”. This approach not only boosts performance, but also improves the usability of the language model to user inputs without engineering of prompts. Finally, we show that Flan and UL2R can be combined as complementary techniques in a model called Flan-U-PaLM 540B, which outperforms the unadapted PaLM 540B model by 10% across a suite of challenging evaluation benchmarks.

UL2R Training

Traditionally, most language models are pre-trained on either a causal language modeling objective that enables the model to predict the next word in a sequence (e.g., GPT-3 or PaLM) or a denoising objective, where the model learns to recover the original sentence from a corrupted sequence of words (e.g., T5). There are some tradeoffs between these objectives: causal LMs are better at long-form generation, while LMs trained on a denoising objective are better for fine-tuning. In prior work, we demonstrated that a mixture-of-denoisers objective that includes both results in better performance in both scenarios.
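As a rough illustration of the difference between the two objective families (a simplified, T5-style span corruption rather than the exact UL2 mixture), the same sentence yields different training examples under each:

```python
import random

sentence = "language models can be improved with a small amount of extra training".split()

# Causal LM objective: predict each next word from the prefix.
causal_example = {
    "input":  " ".join(sentence[:-1]),
    "target": " ".join(sentence[1:]),
}

# Denoising (span corruption) objective: recover masked-out spans.
# This is a simplified T5-style corruption, not the exact UL2 mixture.
def corrupt(tokens, span_start, span_len):
    corrupted = tokens[:span_start] + ["<X>"] + tokens[span_start + span_len:]
    target = ["<X>"] + tokens[span_start:span_start + span_len]
    return {"input": " ".join(corrupted), "target": " ".join(target)}

start = random.randrange(len(sentence) - 3)
denoising_example = corrupt(sentence, start, span_len=3)
print(causal_example)
print(denoising_example)
```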

However, pre-training a large language model on a different objective from scratch can be computationally prohibitive. Hence, we propose UL2 Repair (UL2R), an additional stage of continued pre-training with the UL2 objective that only requires a relatively small amount of compute. We apply UL2R to PaLM and call the resulting new language model U-PaLM.

In empirical evaluations, we found that scaling curves improve substantially with only a small amount of UL2 training. For instance, we show that by using UL2R on the intermediate checkpoint of PaLM 540B, we reach the performance of the final PaLM 540B checkpoint while using 2x less compute (or a difference of 4.4 million TPUv4 hours). Naturally, applying UL2R to the final PaLM 540B checkpoint also leads to substantial improvements, as described in the paper.

Compute versus model performance of PaLM 540B and U-PaLM 540B on 26 NLP benchmarks (listed in Table 8 in the paper). U-PaLM 540B continues training PaLM for a very small amount of compute but provides a substantial gain in performance.

Another benefit that we observed from using UL2R is that on some tasks, performance is much better than models trained purely on the causal language modeling objective. For instance, there are many BIG-Bench tasks that have been described as “emergent abilities”, i.e., abilities that can only be observed in sufficiently large language models. Although the way that emergent abilities are most commonly found is by scaling up the size of the LM, we found that UL2R can actually elicit emergent abilities without increasing the scale of the LM.

For instance, in the Navigate task from BIG-Bench, which measures the model’s ability to perform state tracking, all models with less than 10²³ training FLOPs, except U-PaLM, achieve approximately random performance; U-PaLM’s performance is more than 10 points above that. Another example is the Snarks task from BIG-Bench, which measures the model’s ability to detect sarcasm. Again, whereas all models with less than 10²⁴ training FLOPs achieve approximately random performance, U-PaLM performs well above random even at the 8B and 62B model sizes.

For two abilities from BIG-Bench that demonstrate emergent task performance, U-PaLM achieves emergence at a smaller model size due to its use of the UL2R objective.

Instruction Fine-Tuning

In our second paper, we explore instruction fine-tuning, which involves fine-tuning LMs on a collection of NLP datasets phrased as instructions. In prior work, we applied instruction fine-tuning to a 137B-parameter model on 62 NLP tasks, such as answering a trivia question, classifying the sentiment of a movie, or translating a sentence to Spanish.

In this work we fine-tune a 540B parameter language model on more than 1.8K tasks. Moreover, whereas previous efforts only fine-tuned a LM with few-shot exemplars (e.g., MetaICL) or zero-shot without exemplars (e.g., FLAN, T0), we fine-tune on a combination of both. We also include chain of thought fine-tuning data, which enables the model to perform multi-step reasoning. We call our improved methodology “Flan”, for fine-tuning language models. Notably, even with fine-tuning on 1.8K tasks, Flan only uses a small portion of compute compared to pre-training (e.g., for PaLM 540B, Flan only requires 0.2% of the pre-training compute).
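To make “phrased as instructions” concrete, here is a hedged sketch of what zero-shot, few-shot, and chain-of-thought formatted examples can look like; these are simplified illustrations, not the actual Flan templates:

```python
# Simplified examples of instruction-formatted training data; the real Flan
# collection uses many templates per task.

zero_shot = (
    "Answer the following question.\n"
    "Q: What is the capital of France?\n"
    "A:"
)

few_shot = (
    "Q: What is the capital of Spain?\nA: Madrid\n\n"
    "Q: What is the capital of France?\nA:"
)

chain_of_thought = (
    "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "A: Let's think step by step. It covers 60 km in 1.5 hours, "
    "so its speed is 60 / 1.5 = 40 km per hour. The answer is 40 km/h."
)

# Fine-tuning mixes formats like these across 1.8K tasks so the model
# generalizes to unseen instructions at inference time.
for example in (zero_shot, few_shot, chain_of_thought):
    print(example, end="\n\n")
```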

We fine-tune language models on 1.8K tasks phrased as instructions, and evaluate them on unseen tasks, which are not included in fine-tuning. We fine-tune both with and without exemplars (i.e., zero-shot and few-shot) and with and without chain of thought, enabling generalization across a range of evaluation scenarios.

In the paper, we instruction-fine-tune LMs of a range of sizes to investigate the joint effect of scaling both the size of the LM and the number of fine-tuning tasks. For instance, we use the PaLM class of LMs, which includes models of 8B, 62B, and 540B parameters. We evaluate our models on four challenging benchmark evaluation suites (MMLU, BBH, TyDiQA, and MGSM), and find that scaling both the number of parameters and the number of fine-tuning tasks improves performance on unseen tasks.

Both scaling up to a 540B parameter model and using 1.8K fine-tuning tasks improves the performance on unseen tasks. The y-axis is the normalized average over four evaluation suites (MMLU, BBH, TyDiQA, and MGSM).

In addition to better performance, instruction fine-tuning a LM enables it to respond to user instructions at inference time, without few-shot exemplars or prompt engineering. This makes LMs more user-friendly across a range of inputs. For instance, LMs without instruction fine-tuning can sometimes repeat the input or fail to follow instructions, but instruction fine-tuning mitigates such errors.

Our instruction–fine-tuned language model, Flan-PaLM, responds better to instructions compared to the PaLM model without instruction fine-tuning.

Putting Them Together

Finally, we show that UL2R and Flan can be combined to train the Flan-U-PaLM model. Since Flan uses new data from NLP tasks and enables zero-shot instruction following, we apply Flan as the second method after UL2R. We again evaluate on the four benchmark suites, and find that the Flan-U-PaLM model outperforms PaLM models with just UL2R (U-PaLM) or just Flan (Flan-PaLM). Further, Flan-U-PaLM achieves a new state-of-the-art on the MMLU benchmark with a score of 75.4% when combined with chain of thought and self-consistency.

Combining UL2R and Flan (Flan-U-PaLM) leads to the best performance compared to just using UL2R (U-PaLM) or just Flan (Flan-PaLM). Performance is the normalized average over four evaluation suites (MMLU, BBH, TyDiQA, and MGSM).


Overall, UL2R and Flan are two complementary methods for improving pre-trained language models. UL2R adapts the LM to a mixture-of-denoisers objective using the same data, whereas Flan leverages training data from over 1.8K NLP tasks to teach the model to follow instructions. As LMs become even larger, techniques such as UL2R and Flan that improve general performance without large amounts of compute may become increasingly attractive.

Acknowledgements

It was a privilege to collaborate on these two papers with Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, Denny Zhou, Donald Metzler, Slav Petrov, Neil Houlsby, Quoc V. Le, Mostafa Dehghani, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Ed H. Chi, Jeff Dean, Jacob Devlin, and Adam Roberts.

Categories
Misc

Explainer: What Is a Pod? What Is a Cluster?


Our digital lives run on collections of computers tightly linked on high-speed networks, and the latest one is an AI supercomputer called NVIDIA DGX SuperPOD.