Categories
Misc

So, So Fresh: Play the Newest Games in the Cloud on Day One

It’s a party this GFN Thursday with several newly launched titles streaming on GeForce NOW. Revel in gaming goodness with Xenonauts 2, Viewfinder and Techtonica, among the four new games joining the cloud this week. Portal fans, stay tuned — the Portal: Prelude RTX mod will be streaming on GeForce NOW to members soon.

Categories
Misc

OCI Accelerates HPC, AI, and Database Using RoCE and NVIDIA ConnectX

Oracle is one of the top cloud service providers in the world, supporting over 22,000 customers and reporting revenue of nearly $4 billion per quarter and annual growth of greater than 40%. Oracle Cloud Infrastructure (OCI) is growing at an even faster rate and offers a complete cloud infrastructure for every workload. 

Having added 11 regions in the last 18 months, OCI currently offers 41 regions and supports hosted, on-premises, hybrid, and multi-cloud deployments. It enables customers to run a mix of custom-built, third-party ISVs and Oracle applications on a scalable architecture. OCI provides scalable networking and tools to support security, observability, compliance, and cost management. 

One of the differentiators of OCI is its ability to offer high-performance computing (HPC), Oracle Exadata and Autonomous Database, and GPU-powered applications such as AI and machine learning (ML), with fast infrastructure-as-a-service (IaaS) performance that rivals dedicated on-premises infrastructure. A key component to delivering this high performance is a scalable, low-latency network that supports remote direct memory access (RDMA). For more details, see First Principles: Building a High-Performance Network in the Public Cloud.

Networking challenge of HPC and GPU-powered compute 

A commonality across HPC applications, GPU-powered AI workloads, and the Oracle Autonomous Database on Exadata is that they all run as distributed workloads. Data processing occurs simultaneously on multiple nodes, using a few dozen to thousands of CPUs and GPUs. These nodes must communicate with each other, share intermediate results in multi-stage problem solving with gigabytes to petabytes of storage to access common data, and often assemble the results of distributed computing into a cohesive solution. 

These applications require high throughput and low latency to communicate across nodes to solve problems quickly. Amdahl’s law states that the speedup from parallelizing a task is limited by how much of the task is inherently serial and cannot be parallelized. The amount of time needed to transfer information between nodes adds inherently serial time to the task because nodes must wait for the data transfer to complete and for the slowest node in the task to finish before starting the next parallelizable part of the job. 
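
As a rough illustration of this effect (a hedged sketch with made-up numbers, not OCI measurements), the impact of communication time on achievable speedup can be estimated as follows:

# Hedged illustration of Amdahl's law with communication overhead.
# The workload split and transfer-time values below are made-up examples.
def speedup(n_nodes, serial_fraction, comm_overhead_fraction=0.0):
    # serial_fraction: portion of the job that cannot be parallelized
    # comm_overhead_fraction: extra serial time added by waiting on network
    # transfers, as a fraction of single-node runtime
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + comm_overhead_fraction + parallel_fraction / n_nodes)

print(speedup(1024, 0.05))          # ~19.6x with an ideal, zero-cost network
print(speedup(1024, 0.05, 0.05))    # ~9.9x when transfers add 5% serial time

Even a modest amount of time spent waiting on the network roughly halves the effective speedup in this toy example, which is why the cluster network matters so much.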

For this reason, the performance of the cluster network becomes paramount, and an optimized network can enable a distributed compute cluster to deliver results much sooner than the same computing resources running on a slower network. This time-saving speeds job completion and reduces costs. 

What is RDMA? 

RDMA is remote direct memory access, the most efficient means of transferring data between different machines. It enables a server or storage appliance to communicate and share data over a network without making extra copies and without interrupting the host CPU. It is used for AI, big data, and other distributed technical computing workloads. 

Traditional networking interrupts the CPU multiple times and makes multiple copies of the data being transmitted as it passes from the application through the OS kernel, to the adapter, then back up the stack on the receiving end. RDMA uses only one copy of the data on each end and typically bypasses the kernel, placing data directly in the receiving machine’s memory without interrupting the CPU. 

This process enables lower latency and higher throughput on the network and lower CPU utilization for the servers and storage systems. Today, the majority of HPC, technical computing, and AI applications can be accelerated by RDMA. For more details, see How RDMA Became the Fuel for Fast Networks.

What is InfiniBand? 

InfiniBand is a lossless network optimized for HPC, AI, big data, and other distributed technical computing workloads. It typically supports the highest bandwidth available (currently 400 Gbps per connection) for data center networks and RDMA, enabling machines to communicate and share data without interrupting the host CPU. 

InfiniBand adapters offload networking and data movement tasks from the CPU and feature an optimized, efficient networking stack, enabling CPUs, GPUs, and storage to move data rapidly and efficiently. The InfiniBand adapters and switches can also perform specific compute and data aggregation tasks in the network, mostly oriented around message passing interface (MPI) collective operations. 

This in-network computing speeds up distributed applications, enabling faster problem solving. It also frees up server CPU cores and improves energy efficiency. InfiniBand can also automatically balance traffic loads and reroute connections around broken links. 

As a result, many computing clusters that are dedicated to AI, HPC, big data, or other scientific computing run on an InfiniBand network to provide the highest possible performance and efficiency. When distributed computing performance is the top priority, and the adoption of a specialized stack of network adapters, switches, and management is acceptable, InfiniBand is the network of choice. But a data center might choose to run Ethernet instead of InfiniBand, for other reasons.

What is RoCE? 

RDMA over Converged Ethernet (RoCE) is an open standard enabling remote direct memory access and network offloads over an Ethernet network. The current and most popular implementation is RoCEv2. It uses an InfiniBand communication layer running on top of UDP (Layer 4) and IP (Layer 3), which runs on top of high-speed Ethernet (Layer 2) connections. 

It also supports remote direct memory access, zero-copy data transfers, and bypassing the CPU when moving data. Using the IP protocol on Ethernet enables RoCEv2 to be routable over standard Ethernet networks. RoCE brings many of the advantages of InfiniBand to Ethernet networks. RoCEv2 runs the InfiniBand transport layer over UDP and IP protocols on an Ethernet network. iWARP ran the iWARP protocol on top of the TCP protocol on an Ethernet network but failed to gain popular adoption because of performance and implementation challenges (Figure 1).

A graphic depicting the OSI transport layer mapped across the RDMA software stack for InfiniBand and Ethernet.
Figure 1. NVIDIA InfiniBand runs the InfiniBand transport layer over an InfiniBand network

How do RoCE networks address scalability?

RoCE operates most efficiently on networks with very low levels of packet loss. Traditionally, small RoCE networks use priority flow control (PFC), based on the IEEE 802.1Qbb specification, to make the network lossless. If any destination is too busy to process all incoming traffic, it sends a pause frame to the next upstream switch port, and that switch holds traffic for the time specified in the pause frame. 

If needed, the switch can also send a pause frame up to the next switch in the fabric and eventually onto the originator of the traffic flow. This flow control avoids having the port buffers overflow on any host or switch and prevents packet loss. You can manage up to eight traffic classes with PFC, each class having its own flows and pauses separate from the others. 

However, PFC has some limitations. It operates only at the Layer 2 (Ethernet) level of the Open System Interconnection (OSI) 7-layer model, so it cannot work across different subnets. And although a subnet could in principle contain thousands of nodes, a typical subnet is limited to 254 IP addresses and consists of a few racks (often one rack) within a data center, which does not scale for large distributed applications. 

PFC operates on a coarse-grained port level and cannot distinguish between flows sharing that port. Also, if you use PFC in a multi-level switch fabric, congestion at one destination switch for one flow can spread to multiple switches and block unrelated traffic flows that share one port with the congested flow. The solution is usually to implement a form of congestion control.

Congestion management for large RoCE networks

The TCP protocol includes support for congestion management based on dropped packets. When an endpoint or switch is overwhelmed by a traffic flow, it drops some packets. When the sender fails to receive an acknowledgment from the transmitted data, the sender assumes the packet was lost because of network congestion, slows its rate of transmission, and retransmits the presumably lost data. 

This congestion management scheme does not work well for RDMA on Ethernet (and therefore for RoCE). RoCE does not use TCP, and the process of waiting for packets to time out and then retransmitting the lost data introduces too much latency—and too much variability in latency, or jitter—for efficient RoCE operation. 

Large RoCE networks often implement a more proactive congestion control mechanism known as explicit congestion notification (ECN), in which the switches mark packets if congestion occurs in the network. The marked packets alert the receiver that congestion is imminent, and the receiver alerts the sender with a congestion notification packet or CNP. After receiving the CNP, the sender knows to back off, slowing down the transmission rate temporarily until the flow path is ready to handle a higher rate of traffic. 

Congestion control works across Layer 3 of the OSI model, so it functions across different subnets and scales up to thousands of nodes. However, it requires setting changes to both switches and adapters supporting RoCE traffic. Implementation details of when switches mark packets for congestion, how quickly senders back off sending data, and how aggressively senders resume high-speed transmissions are all critical to determining the scalability and performance of the RoCE network.
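
The following minimal sketch illustrates the general shape of a DC-QCN-style sender reacting to CNPs; the constants and update rules are simplified illustrations, not OCI's production implementation:

# Simplified DC-QCN-style rate control sketch. The alpha update and recovery
# rules are illustrative only, not OCI's production algorithm or values.
class RoceFlowRate:
    def __init__(self, line_rate_gbps=100.0, g=1 / 16):
        self.target = line_rate_gbps    # rate to recover toward
        self.current = line_rate_gbps   # current sending rate
        self.alpha = 1.0                # running estimate of congestion severity
        self.g = g                      # weight for the alpha moving average

    def on_cnp(self):
        # Receiver saw ECN-marked packets and sent a CNP: cut the rate
        self.alpha = (1 - self.g) * self.alpha + self.g
        self.target = self.current
        self.current *= 1 - self.alpha / 2

    def on_quiet_period(self):
        # No CNPs for a timer period: decay alpha and recover toward the target
        self.alpha *= 1 - self.g
        self.current = (self.current + self.target) / 2

flow = RoceFlowRate()
flow.on_cnp()             # congestion notification arrives; rate drops
flow.on_quiet_period()    # congestion clears; rate climbs back toward the target
print(flow.current)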

A graphic depicting ECN workflow mapping the progress of experiencing congestion, marking the packet within the switch, and notifying the sender.
Figure 2. ECN marks outgoing packets as CE–Congestion Experienced when the switch queue is becoming full. The flow recipient receives the packet and notifies the sender to slow transmission

Other Ethernet-based congestion control algorithms include quantized congestion notification (QCN) and data center TCP (DCTCP). In QCN, switches notify flow senders directly with the level of potential congestion, but the mechanism functions only over L2. Consequently, it cannot work across more than one subnet. DCTCP uses the sender’s network interface card (NIC) to measure the round-trip time (RTT) of special packets to estimate how much congestion exists and how much the sender must slow down data transmissions. 

But DCTCP lacks a fast start option to quickly start or resume sending data when no congestion exists, places a heavy load on host CPUs, and does not have a good mechanism for the receiver to communicate with the sender. In any case, DCTCP requires TCP, so it does not work with RoCE. 

Smaller RoCE networks using newer RDMA-capable ConnectX SmartNICs from NVIDIA, or newer NVIDIA BlueField DPUs, can use Zero Touch RoCE (ZTR). ZTR enables excellent RoCE performance without setting up PFC or ECN on the switch, which greatly simplifies network setup. However, initial deployments of ZTR have been limited to small RoCE network clusters, and a more scalable version of ZTR that uses RTT for congestion notification is still in the proving stages. 

How OCI implements a scalable RoCE network 

OCI determined that certain cloud workloads required RDMA for maximum performance. These include AI, HPC, Exadata, autonomous databases, and other GPU-powered applications. Out of the two standardized RDMA options on Ethernet, they chose RoCE for its performance and wider adoption. 

The RoCE implementation needed to scale to run across clusters containing thousands of nodes and deliver consistently low latency to ensure an excellent experience for cloud customers. 

After substantial research, testing, and careful design, OCI decided to customize their own congestion control solution based on the data center quantized congestion notification (DC-QCN) algorithm, which they optimized for different RoCE-accelerated application workloads. The OCI DC-QCN solution is based on ECN with minimal use of PFC.

A graph shows how the OCI network uses RoCE with priority flow control at the link level, explicit congestion notification for unidirectional congestion signaling and data center quantized congestion notification for end-to-end congestion control.
Figure 3. The OCI RoCE network uses ECN across the network fabric plus a limited amount of unidirectional PFC only between the hosts and ToR switches

A separate network for RoCE 

OCI built a separate network for RoCE traffic because the needs of the RDMA network tend to differ from the regular data center network. The different types of application traffic, congestion control, and routing protocols each prefer to have their own queues. Each NIC typically supports only eight traffic classes, and the NIC and switch configuration settings and firmware might be different for RDMA from non-RDMA workloads. For these reasons, having a separate Ethernet network for RoCE traffic and RoCE-accelerated applications makes sense. 

Limited use of PFC at the edge

OCI implemented a limited level of PFC, only unidirectionally at the network edge. Endpoints can ask the top-of-rack (ToR) switch to pause transmission if their NIC buffers fill up. However, the ToR switches never ask the endpoints to pause and do not pass pause requests up the network to leaf or spine switches. This prevents head-of-line blocking and congestion spreading if the incoming traffic flow rate temporarily exceeds the receiver’s ability to process and buffer data. 

The ECN mechanism ensures that PFC is very rarely needed. In the rare case that a receiving node’s NIC buffer is temporarily overrun while the ECN feedback mechanism is activating, PFC enables the receiving node to briefly pause the incoming data flow until the sender receives the CNPs and slows its transmission rate. 

In this sense, you can use PFC as a last resort safeguard to prevent buffer overrun and packet loss at the network edge (at the endpoints). OCI envisions that with the next generation of ConnectX SmartNICs, you might not need PFC, even at the edge of the network. 

Multiple classes of congestion control 

OCI determined that they need at least three customized congestion control profiles within DC-QCN for different workloads. Even within the world of distributed applications that require RDMA networking, the needs vary across the following categories:

  • Latency sensitive, requiring consistently low latency
  • Throughput sensitive, requiring high throughput
  • Mixed, requiring a balance of low latency and high throughput

The primary setting for customizing congestion control is the probability P (ranging from 0 to 1) of the switch adding the ECN marking to an outgoing packet, based on queue thresholds Kmin and Kmax. P starts at 0 when the switch queue is not busy, which means it has no chance of congestion. 

When the port queue reaches Kmin, the value P rises above 0, increasing the chance that any packet is marked with ECN. When the queue fills to value Kmax, P is set to Pmax (typically 1), meaning every outgoing packet of that flow on that switch is marked with ECN. Different DC-QCN profiles typically have a no-congestion range where P is 0, a potential congestion range where P is between 0 and 1, and a congestion range where P is 1. 
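
A minimal sketch of this marking curve is shown below; the queue threshold values are hypothetical examples, not OCI's settings:

# ECN marking probability as a function of switch queue depth.
# The Kmin/Kmax values used below are hypothetical, not OCI settings.
def ecn_mark_probability(queue_kb, kmin_kb, kmax_kb, pmax=1.0):
    if queue_kb < kmin_kb:
        return 0.0                     # no congestion: never mark
    if queue_kb >= kmax_kb:
        return pmax                    # congestion: mark every outgoing packet
    # potential congestion: probability ramps linearly from 0 to Pmax
    return pmax * (queue_kb - kmin_kb) / (kmax_kb - kmin_kb)

# Aggressive, latency-sensitive profile (Kmin == Kmax) vs. a relaxed profile
print(ecn_mark_probability(60, kmin_kb=50, kmax_kb=50))     # 1.0: everything marked early
print(ecn_mark_probability(60, kmin_kb=50, kmax_kb=200))    # ~0.07: only a few packets marked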

A more aggressive set of thresholds has lower values for Kmin and Kmax, resulting in earlier ECN packet marking and lower latency, but possibly also lower maximum throughput. A relaxed set of thresholds has higher values for Kmin and Kmax, marking fewer packets with ECN, which permits somewhat higher latencies but also higher throughput. 

To the right side of Figure 4 are three examples of OCI workloads: HPC, Oracle Autonomous Database and Exadata Cloud Service, and GPU workloads. These services use different RoCE congestion control profiles. HPC workloads are latency-sensitive and give up some throughput to guarantee lower latency. Consequently, Kmin and Kmax are identical and low (aggressive), and at a low amount of queuing, they mark 100% of all packets with ECN. 

Most GPU workloads are more forgiving on latency but need maximum throughput. The DC-QCN profile gradually marks more packets as buffers ramp from Kmin to Kmax and sets those values relatively higher to enable switch buffers to get closer to full before signaling to flow endpoints that they slow down. 

For Autonomous Database and Exadata Cloud Service workloads, the required balance of latency and bandwidth is in between. The marking probability P increases gradually between Kmin and Kmax, but these thresholds are set at lower values than for GPU workloads.

A graphic showing four line graphs comparing how optimal network congestion control is achieved by varying data center quantized congestion notification thresholds on different services.
Figure 4. OCI sets DC-QCN to use different Kmin and Kmax thresholds for ECN packet marking, resulting in optimized network behavior on their RoCE network for different workloads

With these settings, HPC flows get 100% ECN packet marking as soon as the queues hit the Kmin level (which is the same here as Kmax) for early and aggressive congestion control engagement. Oracle Autonomous Database and Exadata flows see moderately early ECN marking, but only a portion of packets is marked until buffers reach the Kmax level. 

Other GPU workloads have a higher Kmin setting so ECN marking does not begin until switch queues are relatively fuller, and 100% ECN marking only happens when the queues are close to full. Different workloads get the customized congestion control settings needed to provide the ideal balance of latency and throughput for maximum application performance. 

Leveraging advanced network hardware

An important factor in achieving high performance for RoCE networks is the type of network card used. The NIC offloads the networking stack, including RDMA, to a specialized chip to offload the work from the CPUs and GPUs. OCI uses ConnectX SmartNICs, which have market-leading network performance for both TCP and RoCE traffic. 

These SmartNICs also support rapid PFC and ECN reaction times for detecting ECN-marked packets or PFC pause frames, sending CNPs, and adjusting the data transmission rates downward and upward in response to congestion notifications. 

NVIDIA has been a longtime leader in the development and support of RDMA, PFC, ECN, and DC-QCN technology, and a leader in high-performance GPUs and GPU connectivity. The advanced RoCE offloads in ConnectX enable higher throughput and lower latency on the OCI network, and their rapid, hardware-based ECN reaction times help ensure that DC-QCN functions smoothly.

By implementing an optimized congestion control scheme on a dedicated RoCE network, plus a combination of localized PFC, multiple congestion control profiles, and NVIDIA network adapters, OCI has built a very scalable cluster network. It’s ideal for distributed workloads, such as AI and ML, HPC, and Oracle Autonomous Database, and delivers high throughput and low-latency performance close to what an InfiniBand network can achieve.  

Emphasizing data locality

Along with optimizing cluster network performance, OCI also manages data locality to minimize latency. RoCE-connected clusters are often large, spanning multiple data center racks and halls, and even in an era of 100-, 200-, and 400-Gbps networking connections, the speed of light has not changed: longer cables result in higher latency. 

Connections to different halls in the data center traverse more switches, and each switch hop adds some nanoseconds to connection latency. OCI shares server locality information with both its customers and the job scheduler, so they can schedule jobs to use servers and GPUs that are close to each other in the network. 

For example, the NVIDIA Collective Communication Library (NCCL) understands the OCI network topology and server locality information and can schedule GPU work accordingly. So, the compute and storage connections traverse fewer switch hops and shorter cable lengths, to reduce the average latency within the cluster. 

It also sends less traffic to spine switches, simplifying traffic routing and load-balancing decisions. OCI also worked with its switch vendors to make the switches more load-aware, so flows can be routed to less-busy connections. Each switch generally has two connections up and down the network, enabling multiple datapaths for any flow. 

Conclusion

By investing in a dedicated RoCE network with an optimized implementation of DC-QCN, advanced ConnectX NICs, and customized congestion control profiles, OCI delivers a highly scalable cluster that supports accelerated computing for many different workloads and applications. OCI cluster networks simultaneously deliver high throughput and low latency. For small clusters, latency (half the round-trip time) can be as little as 2 microseconds. For large clusters, latency is typically under 4 microseconds. For extremely large superclusters, latencies are in the range of 4-8 microseconds, with most traffic seeing latencies at the lower end of this range. 

Oracle Cloud Infrastructure uses an innovative approach to deliver scalable, RDMA-powered networking on Ethernet for a multitude of distributed workloads, providing higher performance and value to its customers. 

Categories
Misc

Programming the Quantum-Classical Supercomputer

Heterogeneous computing architectures—those that incorporate a variety of processor types working in tandem—have proven extremely valuable in the continued scalability of computational workloads in AI, machine learning (ML), quantum physics, and general data science. 

Critical to this development has been the ability to abstract away the heterogeneous architecture and promote a framework that makes designing and implementing such applications more efficient. The most well-known programming model that accomplishes this is CUDA Toolkit, which enables offloading work to thousands of GPU cores in parallel following a single-instruction, multiple-data model. 

Recently, a new form of node-level coprocessor technology has been attracting the attention of the computational science community: the quantum computer, which relies on the non-intuitive laws of quantum physics to process information using principles such as superposition, entanglement, and interference. This unique accelerator technology may prove useful in very specific applications and is poised to work in tandem with CPUs and GPUs, ushering in an era of computational advances previously deemed unfeasible. 

The question then becomes: If you enhance an existing classically heterogeneous compute architecture with quantum coprocessors, how would you program it in a manner fit for computational scalability?

NVIDIA is answering this question with CUDA Quantum, an open-source programming model extending both C++ and Python with quantum kernels intended for compilation and execution on quantum hardware. 

This post introduces CUDA Quantum, highlights its unique features, and demonstrates how researchers can leverage it to gather momentum in day-to-day quantum algorithmic research and development. 

CUDA Quantum: Hello quantum world 

To get a first look at the CUDA Quantum programming model, create a two-qubit GHZ state with the Pythonic interface. This will familiarize you with its syntax.

import cudaq

# Create the CUDA Quantum Kernel
kernel = cudaq.make_kernel()

# Allocate 2 qubits
qubits = kernel.qalloc(2)

# Prepare the bell state
kernel.h(qubits[0]) 
kernel.cx(qubits[0], qubits[1])

# Sample the final state generated by the kernel 
result = cudaq.sample(kernel, shots_count = 1000) 

print(result) 

{11:487, 00:513}

The language specification borrows concepts that have proven successful in CUDA; specifically, the separation of host and device code at the function boundary level. The code snippet below demonstrates this functionality on a GHZ state preparation example in C++. 

#include <cudaq.h>

int main() {
  // Define the CUDA Quantum kernel as a C++ lambda
  auto ghz = [](int numQubits) __qpu__ {
    // Allocate a vector of qubits
    cudaq::qvector q(numQubits);

    // Prepare the GHZ state, leverage standard
    // control flow, specify the x operation
    // is controlled.
    h(q[0]);
    for (int i = 0; i < numQubits - 1; i++)
      x<cudaq::ctrl>(q[i], q[i + 1]);
  };

  // Sample the final state generated by the kernel
  auto results = cudaq::sample(ghz, 15);
  results.dump();

  return 0;
}

CUDA Quantum enables the definition of quantum code as stand-alone kernel expressions. These expressions can be any callable in C++ (a lambda is shown here, an implicitly typed callable), but must be annotated with the __qpu__ attribute, enabling the nvq++ compiler to compile them separately. Kernel expressions can take classical input by value (here, the number of qubits) and leverage standard C++ control flow, for example, for loops and if statements. 

The utility of GPUs

The experimental efforts to scale up QPUs, move them out of research labs, and host them on the cloud for general access have been phenomenal. However, current QPUs are noisy and small-scale, hindering the advancement of algorithmic research. To address this, circuit simulation techniques are answering the pressing need to advance research frontiers. 

Desktop CPUs can simulate small qubit counts; however, the memory requirements of the state vector grow exponentially with the number of qubits. A typical desktop computer with 8 GB of RAM can only sluggishly simulate approximately 15 qubits. The latest NVIDIA DGX H100 enables you to surpass the 35-qubit mark with unparalleled speed. 
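
The memory wall is easy to see with a back-of-the-envelope calculation. Assuming double-precision complex amplitudes (16 bytes each), the state vector doubles in size with every added qubit:

# Memory needed to hold a full n-qubit state vector,
# assuming complex128 amplitudes (16 bytes each)
def state_vector_bytes(num_qubits, bytes_per_amplitude=16):
    return (2 ** num_qubits) * bytes_per_amplitude

for n in (20, 30, 35):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:.4f} GiB")

# 20 qubits: ~0.016 GiB -> trivial
# 30 qubits: 16 GiB     -> already beyond a typical 8 GB desktop
# 35 qubits: 512 GiB    -> needs the aggregate GPU memory of a DGX-class system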

Figure 1 shows a comparison of CUDA Quantum on a CPU and a GPU backend for a typical variational algorithmic workflow. The need for GPUs is evident here, as the speedup at 14 qubits is 425x and increases with qubit count. Extrapolating to 30 qubits, the estimated CPU runtime is 13 years, compared to 2 days on a GPU. This unlocks researchers’ ability to go beyond small-scale proof-of-concept results to implementing algorithms closer to real-world applications.

Bar graph showing performance improvements in execution time between a CPU and GPU as a function of number of qubits. At 14 qubits, the GPU is 425 times faster than the CPU.
Figure 1. Performance comparison between CPU and GPU for a typical quantum neural network workflow as a function of qubit count 

Along with CUDA Quantum, NVIDIA has developed cuQuantum, a library enabling lightning-fast simulation of a quantum computer using both state vector and tensor network methods through hand-optimized CUDA kernels. Memory allocation and processing happen entirely on GPUs, resulting in dramatic increases in performance and scale. CUDA Quantum in combination with cuQuantum forms a powerful platform for hybrid algorithm research. 

Figure 2 compares CUDA Quantum with a leading quantum computing SDK, both leveraging the NVIDIA cuQuantum backend to optimally offload circuit simulation onto NVIDIA GPUs. In this case, the benefits of using CUDA Quantum are isolated and yield a 5x performance improvement on average compared to a leading framework. 

 Line plot showing the execution time for a typical quantum neural network workflow as a function of number of qubits for CUDA Quantum and a leading framework. CUDA Quantum is on average 5x faster. Since both frameworks were executed on GPUs, we are isolating the performance benefits of using CUDA Quantum.
Figure 2. GPU-to-GPU comparison between CUDA Quantum and a leading framework, both offloading circuit simulation to NVIDIA GPUs, with CUDA Quantum on average 5x faster

Enabling multi-QPU workflows of the future

CUDA Quantum is not limited to consideration of current cloud-based quantum execution models, but is fully anticipating tightly coupled, system-level quantum acceleration. Moreover, CUDA Quantum enables application developers to envision workflows for multi-QPU architectures with multi-GPU backends. 

For the preceding quantum neural network (QNN) example, you can use the multi-GPU functionality to distribute a forward pass of the dataset, emulating the multi-QPU workflows of the future, as sketched below. Figure 3 shows the results of distributing the QNN workflow across two GPUs: the overall workflow runs twice as fast as on a single GPU, demonstrating strong scaling and effective usage of all GPU compute resources. 
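
A hedged sketch of what this distribution can look like in Python is shown below; it assumes the nvidia-mqpu simulation target and the asynchronous sampling API (cudaq.sample_async with a qpu_id argument) described in the CUDA Quantum documentation:

import cudaq

# Assumption: the "nvidia-mqpu" target exposes each available GPU as a
# virtual QPU, and qpu_id selects which one executes the kernel.
cudaq.set_target("nvidia-mqpu")

kernel = cudaq.make_kernel()
qubits = kernel.qalloc(10)
kernel.h(qubits[0])
for i in range(9):
    kernel.cx(qubits[i], qubits[i + 1])

# Launch the same kernel asynchronously on two virtual QPUs (two GPUs)
futures = [cudaq.sample_async(kernel, shots_count=1000, qpu_id=i) for i in range(2)]
results = [f.get() for f in futures]   # block until both runs complete
for r in results:
    print(r)

In a real workflow, each virtual QPU would receive a different batch of the dataset rather than the same kernel.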

Line plot showing the execution time of a typical quantum neural network workflow as a function of the number of qubits. The execution time is approximately half when two GPUs are used in comparison to a single GPU.
Figure 3. Results for distributing the QNN forward pass workload to multiple QPUs enabled by the multi-GPU backend

Another common workflow that benefits from multi-QPU parallelization is the Variational Quantum Eigensolver (VQE). This requires the expectation value of a composite Hamiltonian made up of multiple single Pauli tensor product terms. The CUDA Quantum observe call, shown below, automatically batches terms (Figure 4), and offloads to multiple GPUs or QPUs if available, demonstrating strong scaling (Figure 5). 

numQubits, numTerms = 30, 100000
hamiltonian = cudaq.SpinOperator.random(numQubits, numTerms)
# ansatz is a parameterized kernel and parameters its variational values, defined elsewhere
cudaq.observe(ansatz, hamiltonian, parameters)

Image showing a Hamiltonian composed of many terms being batched into four groups and offloaded to four GPUs.
Figure 4. Automatic batching of Hamiltonian terms across multiple NVIDIA A100 GPUs

Bar graph showing speedup in execution time gained by automatically batching a Hamiltonian composed of multiple terms into four batches and executing on four GPUs. The speedups gained demonstrate strong scaling.
Figure 5. Speedups gained due to an optimized software stack supporting the hardware available to the user, GPUs or QPUs

GPU-QPU workflows 

This post has so far explored using GPUs for scaling quantum circuit simulation beyond what is possible on CPUs, as well as multi-QPU workflows. The following sections dive into true heterogeneous computing with a hybrid quantum neural network example using PyTorch and CUDA Quantum.

As shown in Figure 6, a hybrid quantum neural network encompasses a quantum circuit as a layer within the overall neural network architecture. An active area of research, this approach is poised to be advantageous in certain areas, improving generalization.

Image showing layers of neural network nodes, the output of which acts as the input to a quantum circuit, which is measured to generate the loss function. This workflow enables one to integrate PyTorch layers with CUDA Quantum.
Figure 6. Hybrid quantum neural network architecture accelerated by GPUs made possible by CUDA Quantum

Evidently, it is advantageous to run the classical neural network layers on GPUs and the quantum circuits on QPUs. Accelerating the whole workflow with CUDA Quantum is made possible by setting the following: 

quantum_device = cudaq.set_target('ion-trap')   # route quantum kernels to a QPU backend (illustrative target name)
classical_device = torch.cuda.set_device(0)     # run the classical PyTorch layers on GPU 0

The utility of this is profound. CUDA Quantum enables offloading relevant kernels suited for QPUs and GPUs in a tightly integrated, seamless fashion. In addition to hybrid applications, workflows involving error correction, real-time optimal control, and error mitigation through Clifford data regression would all benefit from tightly coupled compute architectures. 

QPU hardware providers 

The foundational information unit embedded within the CUDA Quantum programming paradigm is the qudit, which represents a quantum bit capable of accessing d states. A qubit is the specific instance where d=2. By using qudits, CUDA Quantum can efficiently target diverse quantum computing architectures, including superconducting circuits, ion traps, neutral atoms, diamond-based and photonic systems, and more. 

You can conveniently develop workflows, and the nvq++ compiler automatically compiles and executes the program on the designated architecture. Figure 7 shows the compilation speedups that the novel compiler yields. Compilation involves circuit optimization, decomposition into the native gate set supported by the hardware, and qubit routing. The nvq++ compiler used by CUDA Quantum is on average 2.4x faster than its competition.

Line graph showing how the compilation time scales with number of qubits for CUDA Quantum and a leading framework. The novel compiler used by CUDA Quantum is on average 2.4x faster and its rate of increase (gradient) is also much shallower in comparison.
Figure 7. Compilation time scaling with the number of qubits for CUDA Quantum and a leading framework

To accommodate the desired backend, you can simply modify the set_target() flag. Figure 8 shows an example of how you can seamlessly switch between the simulated backend and the Quantinuum H1 ion trap system. The top shows the syntax to set the desired backend in Python and the bottom in C++. 
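
Because Figure 8 conveys the syntax only as an image, the following is a hedged Python sketch of the idea, reusing the kernel from the earlier GHZ example; the target names are assumptions based on the CUDA Quantum documentation, and the Quantinuum path presumes credentials are already configured:

# GPU-accelerated state vector simulation (cuQuantum backend)
cudaq.set_target("nvidia")
sim_counts = cudaq.sample(kernel, shots_count=1000)

# Same kernel, now routed to Quantinuum hardware instead of the simulator
cudaq.set_target("quantinuum")
hw_counts = cudaq.sample(kernel, shots_count=1000)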

Image showing a heatmap of the cost landscape generated by a VQE workflow being executed on cuQuantum simulated backed and the Quantinuum H1 processor. The ease with which users can change the backend and the syntax enabling this in Python and C++ is highlighted.
Figure 8. VQE landscape plots demonstrating execution on simulated or QPU hardware

Getting started with CUDA Quantum

This post has just briefly touched on some of the features of the CUDA Quantum programming model. Reach out to the CUDA Quantum community on GitHub and get started with some example code snippets. We are excited to see the research CUDA Quantum enables for you. 

Categories
Misc

Sailing Seas of Data: Startup Charts Autonomous Oceanic Monitoring

Saildrone is making a splash in autonomous oceanic monitoring. The startup’s nautical data collection technology has tracked hurricanes up close in the North Atlantic, discovered a 3,200-foot underwater mountain in the Pacific Ocean and begun to help map the entirety of the world’s ocean floor. Based in the San Francisco Bay Area, the company develops…

Categories
Offsites

SimPer: Simple self-supervised learning of periodic targets

Learning from periodic data (signals that repeat, such as a heart beat or the daily temperature changes on Earth’s surface) is crucial for many real-world applications, from monitoring weather systems to detecting vital signs. For example, in the environmental remote sensing domain, periodic learning is often needed to enable nowcasting of environmental changes, such as precipitation patterns or land surface temperature. In the health domain, learning from video measurements has been shown to extract (quasi-)periodic vital signs such as atrial fibrillation and sleep apnea episodes.

Approaches like RepNet highlight the importance of these types of tasks, and present a solution that recognizes repetitive activities within a single video. However, these are supervised approaches that require a significant amount of data to capture repetitive activities, all labeled to indicate the number of times an action was repeated. Labeling such data is often challenging and resource-intensive, requiring researchers to manually capture gold-standard temporal measurements that are synchronized with the modality of interest (e.g., video or satellite imagery).

Alternatively, self-supervised learning (SSL) methods (e.g., SimCLR and MoCo v2), which leverage a large amount of unlabeled data to learn representations that capture periodic or quasi-periodic temporal dynamics, have demonstrated success in solving classification tasks. However, they overlook the intrinsic periodicity (i.e., the ability to identify if a frame is part of a periodic process) in data and fail to learn robust representations that capture periodic or frequency attributes. This is because periodic learning exhibits characteristics that are distinct from prevailing learning tasks.

Feature similarity is different in the context of periodic representations as compared to static features (e.g., images). For example, videos that are offset by short time delays or are reversed should be similar to the original sample, whereas videos that have been upsampled or downsampled by a factor x should be different from the original sample by a factor of x.

To address these challenges, in “SimPer: Simple Self-Supervised Learning of Periodic Targets”, published at the eleventh International Conference on Learning Representations (ICLR 2023), we introduced a self-supervised contrastive framework for learning periodic information in data. Specifically, SimPer leverages the temporal properties of periodic targets using temporal self-contrastive learning, where positive and negative samples are obtained through periodicity-invariant and periodicity-variant augmentations from the same input instance. We propose periodic feature similarity that explicitly defines how to measure similarity in the context of periodic learning. Moreover, we design a generalized contrastive loss that extends the classic InfoNCE loss to a soft regression variant that enables contrasting over continuous labels (frequency). Next, we demonstrate that SimPer effectively learns periodic feature representations compared to state-of-the-art SSL methods, highlighting its intriguing properties including better data efficiency, robustness to spurious correlations, and generalization to distribution shifts. Finally, we are excited to release the SimPer code repo to the research community.

The SimPer framework

SimPer introduces a temporal self-contrastive learning framework. Positive and negative samples are obtained through periodicity-invariant and periodicity-variant augmentations from the same input instance. For temporal video examples, periodicity-invariant changes are cropping, rotation or flipping, whereas periodicity-variant changes involve increasing or decreasing the speed of a video.

To explicitly define how to measure similarity in the context of periodic learning, SimPer proposes periodic feature similarity. This construction allows us to formulate training as a contrastive learning task. A model can be trained with data without any labels and then fine-tuned if necessary to map the learned features to specific frequency values.

Given an input sequence x, we know there’s an underlying associated periodic signal. We then transform x to create a series of speed or frequency altered samples, which changes the underlying periodic target, thus creating different negative views. Although the original frequency is unknown, we effectively devise pseudo speed or frequency labels for the unlabeled input x.
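
A minimal NumPy sketch of this idea (illustrative only, not the authors' implementation): resampling a sequence changes its apparent frequency, producing speed-altered negative views whose relative speed serves as a pseudo frequency label.

import numpy as np

def speed_augment(x, speed):
    # Periodicity-variant augmentation: resample x to play it 'speed' times faster.
    # A speed of 2.0 doubles the apparent frequency, 0.5 halves it, so the
    # relative speed acts as a pseudo frequency label for the augmented view.
    n = len(x)
    src_idx = np.arange(n) * speed           # positions to read from the original signal
    return np.interp(src_idx, np.arange(n), x, period=n)   # wrap around for periodic signals

t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * t)                    # 1 Hz toy "heart beat"
negative_views = {s: speed_augment(x, s) for s in (0.5, 1.5, 2.0)}   # one pseudo label per view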

Conventional similarity measures such as cosine similarity emphasize strict proximity between two feature vectors, and are sensitive to index-shifted features (which represent different time stamps), reversed features, and features with changed frequencies. In contrast, periodic feature similarity should be high for samples with small temporal shifts or reversed indexes, while capturing a continuous similarity change when the feature frequency varies. This can be achieved via a similarity metric in the frequency domain, such as the distance between two Fourier transforms.
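
For instance (a sketch under the assumption that features are 1D temporal vectors), a frequency-domain similarity can be computed from FFT magnitudes, which are insensitive to shifts and reversal but change as the frequency changes:

import numpy as np

def periodic_feature_similarity(f1, f2):
    # Compare two temporal feature vectors in the frequency domain.
    # FFT magnitudes ignore time shifts and reversal, yet differ when the frequency differs.
    m1 = np.abs(np.fft.rfft(f1))
    m2 = np.abs(np.fft.rfft(f2))
    m1 /= np.linalg.norm(m1) + 1e-8
    m2 /= np.linalg.norm(m2) + 1e-8
    return float(m1 @ m2)                      # cosine similarity of the spectra

t = np.arange(300)
base = np.sin(2 * np.pi * t / 30)
shifted = np.roll(base, 7)                     # small (circular) temporal shift
faster = np.sin(2 * np.pi * t / 15)            # doubled frequency
print(periodic_feature_similarity(base, shifted))   # close to 1: treated as similar
print(periodic_feature_similarity(base, faster))    # much lower: treated as different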

To harness the intrinsic continuity of augmented samples in the frequency domain, SimPer designs a generalized contrastive loss that extends the classic InfoNCE loss to a soft regression variant that enables contrasting over continuous labels (frequency). This makes it suitable for regression tasks, where the goal is to recover a continuous signal, such as a heart beat.
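
A hedged sketch of what such a soft-label variant could look like (simplified, not the exact SimPer loss): instead of a single hard positive, each pair of views is weighted by how close their pseudo frequency labels are.

import torch
import torch.nn.functional as F

def soft_infonce(similarities, freq_labels, temperature=0.1, label_temperature=0.5):
    # similarities: [N, N] pairwise feature similarities between augmented views
    # freq_labels:  [N] pseudo frequency (speed) label of each view
    # Views with nearby frequencies receive a larger share of the target distribution.
    label_dist = -torch.cdist(freq_labels[:, None], freq_labels[:, None])  # 0 on the diagonal
    targets = F.softmax(label_dist / label_temperature, dim=1)             # soft targets per row
    log_probs = F.log_softmax(similarities / temperature, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()    # cross-entropy against soft targets

sims = torch.randn(8, 8)        # placeholder pairwise similarities from an encoder
freqs = torch.tensor([0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25])
print(soft_infonce(sims, freqs))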

SimPer constructs negative views of data through transformations in the frequency domain. The input sequence x has an underlying associated periodic signal. SimPer transforms x to create a series of speed or frequency altered samples, which changes the underlying periodic target, thus creating different negative views. Although the original frequency is unknown, we effectively devise pseudo speed or frequency labels for unlabeled input x (periodicity-variant augmentations τ). SimPer takes transformations that do not change the identity of the input and defines these as periodicity-invariant augmentations σ, thus creating different positive views of the sample. Then, it sends these augmented views to the encoder f, which extracts corresponding features.

Results

To evaluate SimPer’s performance, we benchmarked it against state-of-the-art SSL schemes (e.g., SimCLR, MoCo v2, BYOL, CVRL) on a set of six diverse periodic learning datasets for common real-world tasks in human behavior analysis, environmental remote sensing, and healthcare. Specifically, below we present results on heart rate measurement and exercise repetition counting from video. The results show that SimPer outperforms the state-of-the-art SSL schemes across all six datasets, highlighting its superior performance in terms of data efficiency, robustness to spurious correlations, and generalization to unseen targets.

Here we show quantitative results on two representative datasets using models pre-trained with various SSL methods and fine-tuned on the labeled data. First, we pre-train SimPer using the Univ. Bourgogne Franche-Comté Remote PhotoPlethysmoGraphy (UBFC) dataset, a human photoplethysmography and heart rate prediction dataset, and compare its performance to state-of-the-art SSL methods. We observe that SimPer outperforms the SimCLR, MoCo v2, BYOL, and CVRL methods. The results on the human action counting dataset, Countix, further confirm the benefits of SimPer over other methods, as it notably outperforms the supervised baseline. For the feature evaluation results and performance on other datasets, please refer to the paper.

Results of SimCLR, MoCo v2, BYOL, CVRL and SimPer on the Univ. Bourgogne Franche-Comté Remote PhotoPlethysmoGraphy (UBFC) and Countix datasets. Heart rate and repetition count performance is reported as mean absolute error (MAE).

Conclusion and applications

We present SimPer, a self-supervised contrastive framework for learning periodic information in data. We demonstrate that by combining a temporal self-contrastive learning framework, periodicity-invariant and periodicity-variant augmentations, and continuous periodic feature similarity, SimPer provides an intuitive and flexible approach for learning strong feature representations for periodic signals. Moreover, SimPer can be applied to various fields, ranging from environmental remote sensing to healthcare.

Acknowledgements

We would like to thank Yuzhe Yang, Xin Liu, Ming-Zher Poh, Jiang Wu, Silviu Borac, and Dina Katabi for their contributions to this work.

Categories
Misc

Advanced API Performance: Pipeline State Objects

Pipeline state objects (PSOs) define how input data is interpreted and rendered by the hardware when submitting work to the GPUs. Proper management of PSOs is essential for optimal usage of system resources and smooth gameplay.

Recommended:

  • Create PSOs on worker threads asynchronously.
    • PSO creation is where shader compilation and related stalls happen.
  • Start with generic PSOs with generic shaders that compile quickly and generate specializations later.
    • This gets you up and running faster even if you are not running the most optimal PSO or shader yet.
    • Shaders shared between PSOs will only compile once.
  • Avoid runtime PSO compilations as they most likely will lead to stalls.
    • The driver-managed shader disk cache may come to the rescue.
  • Use PSO libraries.
  • Use identical sensible defaults for don’t care fields wherever possible.
    • This allows for more possibilities for PSO reuse.
  • Use the /all_resources_bound / D3DCOMPILE_ALL_RESOURCES_BOUND compile flag if possible.
    • The compiler can do a better job at optimizing texture accesses. 
  • Arrange draw calls by PSO & tessellation usage.
  • Remember that PSO creation is where shaders are compiled and stalls are introduced.
    • It is really important to create PSOs asynchronously and early enough before they are used.
    • Tread carefully with thread priorities for PSO compilation threads.
    • Use Idle priority if there is no ‘hurry’ to prevent slowdowns for game threads.
    • Consider temporarily boosting priorities when there is a ‘hurry’.

Not recommended:

  • Toggling between compute and graphics on the same command queue more than necessary.
    • This is still a heavyweight switch to make.
  • Toggling tessellation on/off more than necessary.
    • This is also a heavyweight switch to make.
  • Using FXC to generate DXBC in DX12.
    • This causes extra DXBC to DXIL translation, increasing compilation time and PSO library size.
  • Serializing very large (hundreds of thousands) numbers of PSOs to disk in PSO libraries at once.
    • This may significantly bloat the usage of system memory.
    • Use the “miss and update the PSO library” strategy instead.

This post covers best practices when working with pipeline state objects on NVIDIA GPUs. To get a high and consistent frame rate in your applications, see all Advanced API Performance tips.

Acknowledgments

Thanks to Patrick Neil and Dhiraj Kumar for their advice and assistance.

Categories
Misc

Developing a Pallet Detection Model Using OpenUSD and Synthetic Data

Imagine you are a robotics or machine learning (ML) engineer tasked with developing a model to detect pallets so that a forklift can manipulate them. You are familiar with traditional deep learning pipelines, you have curated manually annotated datasets, and you have trained successful models. 

You are ready for the next challenge, which comes in the form of large piles of densely stacked pallets. You might wonder, where should I begin? Is 2D bounding box detection or instance segmentation most useful for this task? Should I do 3D bounding box detection and, if so, how will I annotate it? Would it be best to use a monocular camera, stereo camera, or lidar for detection? Given the sheer quantity of pallets that occur in natural warehouse scenes, manual annotation will not be an easy endeavor. And if I get it wrong, it could be costly.

This is what I wondered when faced with a similar situation. Fortunately, I had an easy way to get started with relatively low commitment: synthetic data.

Overview of synthetic data

Synthetic Data Generation (SDG) is a technique for generating data to train neural networks using rendered images rather than real-world images. The advantage of using synthetically rendered data is that you implicitly know the full shape and location of objects in the scene and can generate annotations like 2D bounding boxes, keypoints, 3D bounding boxes, segmentation masks, and more.

Synthetic data can be a great way to bootstrap a deep learning project, as it enables you to rapidly iterate on ideas before committing to large manual data annotation efforts, or in cases where data is limited, restricted, or simply does not exist. For such cases, you might find that synthetic data with domain randomization works very well for your application out of the box on the first try. And voilà, you save time. 

Alternatively, you might find that you need to redefine the task or use a different sensor modality.  Using synthetic data, you can experiment with these decisions without committing to a costly annotation effort.  

In many cases, you may still benefit from using some real-world data. The nice part is, by experimenting with synthetic data you will have more familiarity with the problem, and can invest your annotation effort where it counts the most. Each ML task presents its own challenges, so it is difficult to determine exactly how synthetic data will fit in, whether you will need to use real-world data, or a mix of synthetic and real data.  

Using synthetic data to train a pallet segmentation model

When considering how to use synthetic data to train a pallet detection model, our team started small. Before we considered 3D box detection or anything complex, we first wanted to see if we could detect anything at all using a model trained with synthetic data. To do so, we rendered a simple dataset of scenes containing just one or two pallets with a box on top. We used this data to train a semantic segmentation model.  

We chose to train a semantic segmentation model because the task is well defined and the model architectures are relatively simple. It is also possible to visually identify where the model is failing (the incorrectly segmented pixels).

To train the segmentation model, the team first rendered coarse synthetic scenes (Figure 1).

A rendering of two pallets with a box on top. The rendering is coarse, and the box is a uniform gray color.
Figure 1. A coarse synthetic rendering of two pallets with a box on top

The team suspected that these rendered images alone would lack the diversity to train a meaningful pallet detection model. We also decided to experiment with augmenting the synthetic renderings using generative AI to produce more realistic images. Before training, we applied generative AI to these images to add variation that we believed would improve the ability of the model to generalize to the real world.  

This was done using a depth conditioned generative model, which roughly preserved the pose of objects in the rendered scene. Note that using generative AI is not required when working with SDG. You could also try using traditional domain randomization, like varying the synthetic textures, colors, location, and orientation of the pallets. You may find that traditional domain randomization by varying the rendered textures is sufficient for the application.
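
As a concrete but hypothetical illustration of what such traditional randomization parameters might look like (ranges and asset names are invented for the example, independent of any particular renderer):

import random

# Hypothetical domain randomization parameters for one rendered pallet scene.
# The ranges and asset names are illustrative only.
def sample_pallet_scene_params():
    return {
        "pallet_asset": random.choice(["wood_a", "wood_b", "plastic_black", "plastic_blue"]),
        "stack_height": random.randint(1, 6),                     # pallets per stack
        "position_m": (random.uniform(-2, 2), random.uniform(-2, 2)),
        "yaw_deg": random.uniform(0, 360),
        "texture": random.choice(["clean", "scuffed", "painted"]),
        "light_intensity": random.uniform(300, 3000),
        "camera_height_m": random.uniform(0.5, 2.5),
    }

scene_params = [sample_pallet_scene_params() for _ in range(2000)]   # one dict per rendered image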

An image of the synthetically rendered scene augmented using generative AI.  The augmented image looks photorealistic, and the uniform gray box is replaced with a plastic wrapped box.
Figure 2. The synthetic rendering, augmented using generative AI

After rendering about 2,000 of these synthetic images, we trained a resnet18-based Unet segmentation model using PyTorch. Quickly, the results showed great promise on real-world images (Figure 3).
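
A minimal training-loop sketch of this step is shown below. It is a hedged illustration, not the team's actual code: the segmentation_models_pytorch package is just one convenient way to get a ResNet-18 U-Net, and the random tensors stand in for the rendered images and masks.

import torch
import torch.nn as nn
import segmentation_models_pytorch as smp
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: in practice these are the ~2,000 rendered images and pallet masks
images = torch.rand(16, 3, 256, 256)
masks = torch.randint(0, 2, (16, 1, 256, 256)).float()
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

model = smp.Unet(encoder_name="resnet18", in_channels=3, classes=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()          # binary mask: pallet vs. background

model.train()
for epoch in range(10):
    for x, y in loader:
        logits = model(x)                   # [B, 1, H, W] pallet logits
        loss = criterion(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()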

An image showing a single pallet with a box on top. The pallet is highlighted in green to show the semantic segmentation result.
Figure 3. Real-world pallet image, tested with segmentation model 

The model could accurately segment the pallet. Based on this result, we developed more confidence in the workflow, but the challenge was far from over. Up to this point, the team’s approach did not distinguish between instances of pallets, and it did not detect pallets that were not placed on the floor. For images like the one shown in Figure 4, the results were barely usable. This likely meant that we needed to adjust our training distribution.

An image showing the semantic segmentation results on a warehouse scene with pallets and stacked boxes.  The segmentation model fails to detect pallets that aren't on the floor.
Figure 4. Semantic segmentation model fails to detect stacked pallets

Iteratively increasing the data diversity to improve accuracy

To improve the accuracy of the segmentation model, the team added more images of a wider variety of pallets stacked in different random configurations. We added about 2,000 more images to our dataset, bringing the total to about 4,000 images. We created the stacked pallet scenes using the USD Scene Construction Utilities open-source project. 

USD Scene Construction Utilities was used to position pallets relative to each other in configurations that reflect the distribution you might see in the real world. We used Universal Scene Description (OpenUSD) SimReady Assets, which offered a large diversity of pallet models to choose from.

Images of stacked pallets rendered using Omniverse Replicator.  The pallets vary in type, color and orientation.
Figure 5. Structured scenes created using the USD Python API and USD Scene Construction Utilities, and further randomized and rendered with Omniverse Replicator

Training with the stacked pallets, and with a wider variety of viewpoints, we were able to improve the accuracy of the model for these cases.

If adding this data helped the model, why generate only 2,000 images if there is no added annotation cost? We did not start with many images because we were sampling from the same synthetic distribution. Adding more images would not necessarily add much diversity to our dataset. Instead, we might just be adding many similar images without improving the model’s real-world accuracy.  

Starting small enabled the team to quickly train the model, see where it failed, and adjust the SDG pipeline and add more data. For example, after noticing the model had a bias towards specific colors and shapes of pallets, we added more synthetic data to address these failure cases.

A rendering of scenes containing plastic pallets in many different colors.
Figure 6. A rendering of plastic pallets in various colors

These data variations improved the model’s ability to handle the failure scenarios it encountered (plastic and colored pallets).

If data variation is good, why not just go all-out and add a lot of variation at once? Until our team began testing on real-world data, it was difficult to tell what variance might be required. We might have missed important factors needed to make the model work well. Or, we might have overestimated the importance of other factors, exhausting our effort unnecessarily. By iterating, we better understood what data was needed for the task.

Extending the model for pallet side face center detection

Once we had some promising results with segmentation, the next step was to adjust the task from semantic segmentation to something more practical. We decided that the simplest next task to evaluate was detecting the center of the pallet side faces. 

An image showing a rendered sample with a heat map overlaid on top of the center of the pallet’s side faces.
Figure 7. Example data for the pallet side face center detection task

The pallet side face center points are where a forklift would center itself when manipulating the pallet. While more information may be necessary in practice to manipulate the pallet (such as the distance and angle at this point), we considered this point a simple next step in this process that enables the team to assess how useful our data is for any downstream application.  

Detecting these points could be done with heat map regression, which, like segmentation, is done in the image domain, is easy to implement, and simple to visually interpret. By training a model for this task, we could quickly assess how useful our synthetic dataset is at training a model to detect important key points for manipulation.
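
For instance, the regression target for each image can be a Gaussian bump rendered at every labeled side face center. A minimal sketch (the sigma value and image size are illustrative):

import numpy as np

def keypoint_heatmap(height, width, keypoints_xy, sigma=8.0):
    # Render a heat map target with one Gaussian peak per keypoint.
    # keypoints_xy: list of (x, y) pixel coordinates of pallet side face centers.
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for x, y in keypoints_xy:
        bump = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, bump)    # overlapping peaks keep their maximum
    return heatmap

target = keypoint_heatmap(256, 256, [(64, 200), (190, 210)])   # two visible side faces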

The results after training were promising, as shown in Figure 8.

Multiple images showing the heat maps of the pallet side face detection model in multiple scenarios. The scenarios include pallets side by side on the floor, pallets stacked neatly on top of each other, and pallets stacked with boxes.
Figure 8. Real-world detection results for the pallet side face detection model

The team confirmed the ability to detect the pallet side faces using synthetic data, even with closely stacked pallets. We continued to iterate on the data, model, and training pipeline to improve the model for this task. 

Extending the model for corner detection

When we reached a satisfactory point for the side face center detection model, we explored taking the task to the next level: detecting the corners of the pallet. The initial approach was to use a heat map for each corner, similar to the approach for the pallet side face centers.

An image showing the heat map detection for the corners of a pallet with a box on top. The heat maps for occluded corners are blurry, indicating the difficulty the model has in predicting the precise location of these points.
Figure 9. Pallet corner detection model using heat maps

However, this approach quickly presented a challenge. Because the pallets had unknown dimensions, it was difficult for the model to infer precisely where a corner should be when it was not directly visible. And with heat maps, inconsistent peak values are difficult to parse reliably.

So, instead of using heat maps, we chose to regress the corner locations after detecting the face center peak. We trained a model to infer a vector field that contains the offsets of the corners from a given pallet face center. This approach quickly showed promise, providing meaningful estimates of corner locations even with large occlusions.
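The decoding step might look roughly like the following sketch: find peaks in the face-center heat map, then read the corner offsets from the vector field at each peak. Tensor shapes and the threshold are assumptions for illustration, not our exact implementation.

```python
# Minimal sketch of decoding face-center peaks plus corner offsets.
# Shapes and the threshold are illustrative assumptions.
import numpy as np

def decode_pallets(heatmap, vector_field, threshold=0.5):
    """
    heatmap:      (H, W) face-center confidence map
    vector_field: (8, H, W) per-pixel (dx, dy) offsets to the 4 corners
    Returns a list of (center_xy, corners_xy) tuples.
    """
    detections = []
    ys, xs = np.nonzero(heatmap > threshold)
    for y, x in zip(ys, xs):
        # Keep only local maxima so each pallet face yields a single detection
        window = heatmap[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if heatmap[y, x] < window.max():
            continue
        offsets = vector_field[:, y, x].reshape(4, 2)            # (dx, dy) per corner
        corners = offsets + np.array([x, y], dtype=np.float32)   # absolute positions
        detections.append(((float(x), float(y)), corners))
    return detections
```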

An image showing four pallets in a cluttered scene. The pallets are detected and their shape is approximately determined. This shows the ability of the regression model to handle the heat map model’s failure case.
Figure 10. The pallet detection results using face center heat map and vector field-based corner regression

Now that the team had a promising working pipeline, we iterated and scaled this process to address different failure cases that arose. In total, our final model was trained on roughly 25,000 rendered images. Although trained at a relatively low resolution (256 x 256 pixels), our model could detect small pallets by running inference at higher resolutions. In the end, we were able to detect challenging scenes, like the one shown in Figure 11, with relatively high accuracy.

This was something we could use, and it was created entirely with synthetic data. This is where our pallet detection model stands today.

An image showing nearly 100 pallets, some of varied shape, stacked in a warehouse. The model detects each pallet except a few in the background.
Figure 11. The final pallet model detection results, with only the front face of the detection shown for ease of visualization

A gif of the pallet detection model running in real time detecting a single black plastic pallet. The video is shaky and blurry, demonstrating the ability of the model to detect the pallet even under adverse conditions.
Figure 12. The pallet detection model running in real time

Get started building your own model with synthetic data

By iterating with synthetic data, our team developed a pallet detection model that works on real-world images. Further progress may be possible with more iteration, and beyond this point our task might benefit from the addition of real-world data. However, without synthetic data generation, we could not have iterated as quickly, because each change we made would have required a new annotation effort.

If you are interested in trying this model, or are working on an application that could use a pallet detection model, you can find both the model and inference code by visiting SDG Pallet Model on GitHub. The repo includes the pretrained ONNX model as well as instructions to optimize the model with TensorRT and run inference on an image. The model can run in real time on NVIDIA Jetson AGX Orin, so you will be able to run it at the edge. 
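As a rough sketch of what inference could look like with the exported ONNX model, the example below uses ONNX Runtime. The file names, 256 x 256 input size, normalization, and output interpretation are assumptions for illustration; the repo's instructions are the authoritative reference for preprocessing and TensorRT optimization.

```python
# Minimal sketch: run the pretrained ONNX model with ONNX Runtime.
# File names, input size, normalization, and output meaning are assumptions;
# follow the SDG Pallet Model repo for the exact preprocessing.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession(
    "pallet_model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

image = Image.open("warehouse.jpg").convert("RGB").resize((256, 256))
tensor = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0  # NCHW, 0-1 range

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: tensor})  # e.g., heat maps and corner vector fields
```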

You can also check out the recently open-sourced project, USD Scene Construction Utilities, which contains examples and utilities for building USD scenes using the USD Python API. 

We hope our experience inspires you to explore how you can use synthetic data to bootstrap your AI application. If you’d like to get started with synthetic data generation, NVIDIA offers a suite of tools to simplify the process. These include:

  1. Universal Scene Description (OpenUSD): Often described as the HTML of the metaverse, USD is a framework for fully describing 3D worlds. Not only does USD include primitives like 3D object meshes, but it also has the ability to describe materials, lighting, cameras, physics, and more.
  2. NVIDIA Omniverse Replicator: A core extension of the NVIDIA Omniverse platform, Replicator enables developers to generate large and diverse synthetic training datasets to bootstrap perception model training. With features such as easy-to-use APIs, domain randomization, and multi-sensor simulation, Replicator can address the lack-of-data challenge and accelerate the model training process (see the sketch after this list).
  3. SimReady Assets: Simulation-ready assets are physically accurate 3D objects that encompass accurate physical properties, behavior, and connected data streams to represent the real world in simulated digital worlds. NVIDIA offers a collection of realistic assets and materials that can be used out-of-the-box for constructing 3D scenes. This includes a variety of assets related to warehouse logistics, like pallets, hand trucks, and cardboard boxes. To search, display, inspect, and configure SimReady assets before adding them to an active stage, you can use the SimReady Explorer extension. Each SimReady asset has its own predefined semantic label, making it easier to generate annotated data for segmentation or object detection models. 
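To make the Replicator workflow more concrete, here is a minimal, hedged sketch of a randomization script of the kind you would run inside Omniverse (for example, in Omniverse Code or Isaac Sim). The asset selection, value ranges, frame count, and writer configuration are illustrative assumptions, not the exact pipeline used for the pallet model.

```python
# Minimal sketch of an Omniverse Replicator randomization script; runs inside
# Omniverse. Semantics, ranges, and writer settings are illustrative assumptions.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, -6, 3), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # SimReady assets ship with semantic labels, so they can be grabbed by class
    pallets = rep.get.prims(semantics=[("class", "pallet")])

    with rep.trigger.on_frame(num_frames=2000):
        with pallets:
            rep.modify.pose(
                position=rep.distribution.uniform((-2, -2, 0), (2, 2, 0)),
                rotation=rep.distribution.uniform((0, 0, -15), (0, 0, 15)),
            )

    # Write RGB frames plus segmentation annotations for training
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_pallets", rgb=True, semantic_segmentation=True)
    writer.attach([render_product])

rep.orchestrator.run()
```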

If you have questions about the pallet model, synthetic data generation with NVIDIA Omniverse, or inference with NVIDIA Jetson, reach out on GitHub or visit the NVIDIA Omniverse Synthetic Data Generation Developer Forum and the NVIDIA Jetson Orin Nano Developer Forum.

Explore what’s next in AI at SIGGRAPH

Join us at SIGGRAPH 2023 for a powerful keynote by NVIDIA CEO Jensen Huang. You’ll get an exclusive look at some of our newest technologies, including award-winning research, OpenUSD developments, and the latest AI-powered solutions for content creation.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. If you’re a developer, get started building your first extension or developing a Connector with Omniverse resources. Stay up-to-date on the platform by subscribing to the newsletter, and following NVIDIA Omniverse on Instagram, Medium, and Twitter. For resources, check out our forums, Discord server, Twitch, and YouTube channels.

Categories
Misc

Research Unveils Breakthrough Deep Learning Tool for Understanding Neural Activity and Movement Control

A black and white GIF of a mouse walking on a wheel.

A primary goal in the field of neuroscience is understanding how the brain controls movement. By improving pose estimation, neurobiologists can more precisely quantify natural movement and, in turn, better understand the neural activity that drives it. This enhances scientists’ ability to characterize animal intelligence, social interaction, and health.

Columbia University researchers recently developed a video-centric deep learning package that tracks animal movement more robustly from video, which helps: 

  • obtain reliable pose predictions in the face of occlusions and dataset shifts. 
  • train on images and videos simultaneously, while significantly shortening training time.
  • simplify the software engineering needed to train models, form predictions, and visualize the results.

Named Lightning Pose, the tool trains deep learning models in PyTorch Lightning on both labeled images and unlabeled videos, which are decoded and processed on the GPU using NVIDIA DALI.

In this blog post, you’ll see how contemporary computer vision architectures benefit from open-source, GPU-accelerated video processing. 

Deep learning algorithms for automatic pose tracking in video have recently garnered much attention in neuroscience. The standard approach involves training a convolutional network in a fully supervised manner on a set of annotated images.

Most convolutional architectures are built to handle single images and do not use the temporal information hidden in videos. By tracking each keypoint individually, these networks may generate nonsensical poses or poses that are inconsistent across multiple cameras. Despite its wide adoption and success, the prevailing approach tends to overfit the training set and struggles to generalize to unseen animals or laboratories.

An efficient approach to animal pose tracking

The Lightning Pose package, represented in Figure 1, is a set of deep learning models for animal pose tracking, implemented in PyTorch Lightning. It takes a video-centric, semi-supervised approach to training pose estimation models. In addition to training on a set of labeled frames, it trains on many unlabeled video clips and penalizes itself when its sequences of pose predictions are incoherent (that is, when they violate basic spatiotemporal constraints). The unlabeled videos are decoded and processed on the fly directly on a GPU using DALI.
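To make the idea of a spatiotemporal penalty concrete, here is a minimal sketch of one such unsupervised loss, a frame-to-frame smoothness term. It is a simplified illustration, not the exact set of losses implemented in Lightning Pose.

```python
# Minimal sketch of an unsupervised temporal-smoothness penalty on predictions
# for unlabeled video; a simplified illustration, not Lightning Pose's exact losses.
import torch

def temporal_smoothness_loss(keypoints, max_jump_px=20.0):
    """
    keypoints: (T, K, 2) predicted (x, y) coordinates for T frames and K keypoints.
    Penalizes frame-to-frame displacements larger than a plausible threshold.
    """
    displacements = torch.norm(keypoints[1:] - keypoints[:-1], dim=-1)  # (T-1, K)
    excess = torch.clamp(displacements - max_jump_px, min=0.0)
    return excess.mean()
```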

The three-layered approach to pose estimation. The PyTorch Lightning layer (0) covers the data loaders, the architecture, and loss calculation. The second layer (1) covers the model design. The third layer (2) is where Hydra handles configuration and hyperparameter sweeps.
Figure 1. The structure of the Lightning Pose package. Data loading (including DALI video readers), models, and a loss factory are wrapped inside a PyTorch Lightning trainer and a Hydra configurator

During training, videos are randomly modified, or augmented, in various ways by DALI. This exposes the network to a wider range of training examples and prepares it better for unexpected systematic variations in the data it may encounter when deployed.

Its semi-supervised architecture, shown in Figure 2, learns from both labeled and unlabeled frames.

Lightning Pose consists of a backbone that consumes a few labeled frames and many unlabeled videos. The results are passed to a head that predicts keypoints for both labeled and unlabeled frames. When labels are available, a supervised loss is applied; for unlabeled videos, Lightning Pose applies a set of unsupervised losses.
Figure 2. The Lightning Pose architecture diagram, combining supervised learning (top) with unsupervised learning (bottom)

Lightning Pose results in more accurate and precise tracking compared to standard supervised networks, across different species (mice, fish, and so on) and tasks (full-body locomotion, eye tracking, and so on). The traditional fully supervised approach requires extensive image labeling and struggles to generalize to new videos. It often produces noisy outputs that hinder downstream analyses.

Its new pose estimation networks generalize better to unseen videos and provide smoother, more reliable pose trajectories. The tool also enhances robustness and usability. Through semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools, Lightning Pose models achieve lower pixel errors than DeepLabCut with as few as 75 labeled frames, improving average keypoint pixel error across frames by roughly 40 percent (DeepLabCut: 14.60±4).

The clearest gains were seen in a mouse pupil tracking dataset from the International Brain Lab, where, even with over 3,000 labeled frames, the predictions were more accurate and led to more reliable scientific analyses.

Prediction comparison of mouse pupil tracking between the DeepLabCut model, Lightning Pose, and Lightning Pose combined with Ensemble Kalman Smoothing
Figure 3. Visualization of mouse pupil tracking

Figure 3 shows tracking of the top, bottom, left, and right corners of a mouse’s pupil during a neuroscience experiment. On the left, the DeepLabCut model places a significant number of predictions in implausible parts of the image (red boxes).

The center shows Lightning Pose predictions, and the right combines Lightning Pose with the authors’ Ensemble Kalman smoothing approach. Both Lightning Pose approaches track the four points nicely and predict them in plausible areas.

Improved pupil tracking in turn exposes stronger correlations with neural activity. The authors performed a regression between neural activity and tracked pupil diameter across 66 neuroscience experiments, and found that the model outputs were decoded more reliably from brain activity. 

Pupil diameter value comparison. Blue values are those extracted by Lightning Pose tracking (+Ensemble Kalman Smoothing) compared to the prediction of a decoder trained on neural data (ridge regression).
Figure 4. Pupil diameter extracted from the model compared to neural data

Figure 4 shows pupil diameter decoding from brain recordings. The left side of Figure 4 graphs the pupil diameter time series derived from a Lightning Pose model (LP+EKS; blue) and the predictions from applying linear regression to neural data (orange).

The right side of Figure 4 shows R2 goodness-of-fit values quantifying how well pupil diameter can be decoded from neural activity. As shown, Lightning Pose and the ensemble version produce significantly better results (DLC: R2 = 0.27±0.02; LP: R2 = 0.33±0.02; LP+EKS: R2 = 0.35±0.02).

The following video shows the robustness of the predictions for a mouse running on a treadmill.

Video 1. Example prediction of the mouse leg position (blue: Lightning Pose; red: supervised baseline model)

Improving the image-centric approach to convolutional architectures with DALI 

Applying convolutional networks to videos presents a unique challenge: these networks typically operate on individual images. Despite the growing computational power of deep learning accelerators, such as new GPU generations, Tensor Cores, and CUDA Graphs, this image-centric approach has remained largely unchanged. Current architectures require videos to be split into individual frames during preprocessing and often saved to disk for later loading. These frames are then augmented and transformed on the CPU before being fed to the network waiting on the GPU.

Lightning Pose leverages DALI for GPU-accelerated decoding and processing of videos. This stands in contrast to most computer vision deep learning architectures, such as ResNets and Transformers, which typically operate only on single images. When applied sequentially to videos, these architectures (and the popular neuroscience tools DeepLabCut and SLEAP that are based on them) often produce discontinuous predictions that violate the laws of physics, such as an object jumping from one corner of a room to another in two consecutive video frames.

DALI stack showing how data is taken from storage (image, video, or audio), decoded and transformed with GPU acceleration, and made ready for training or inference by the deep learning framework.
Figure 5. DALI functional flow

DALI offers an efficient solution for Lightning Pose by:

  1. reading the videos.
  2. handling the decoding process (thanks to the NVIDIA Video Codec SDK).
  3. applying various augmentations (rotation, resize, brightness and contrast adjustment, or even adding shot noise), as sketched in the example after this list.
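The following is a minimal sketch of such a DALI pipeline: GPU video decode followed by a few GPU-side augmentations. File names, batch and sequence sizes, and augmentation ranges are illustrative assumptions, and exact operator arguments may vary by DALI version; it is not the pipeline shipped with Lightning Pose.

```python
# Minimal sketch of a DALI video pipeline: GPU decode plus GPU-side augmentations.
# File names, sizes, and ranges are illustrative assumptions.
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=8, num_threads=4, device_id=0)
def video_pipeline(filenames):
    # Decode short clips directly on the GPU (NVIDIA Video Codec SDK under the hood)
    frames = fn.readers.video(
        device="gpu",
        filenames=filenames,
        sequence_length=16,
        random_shuffle=True,
        name="video_reader",
    )
    # On-the-fly augmentations, also executed on the GPU
    frames = fn.rotate(frames, angle=fn.random.uniform(range=(-10.0, 10.0)), fill_value=0)
    frames = fn.resize(frames, resize_x=256, resize_y=256)
    frames = fn.brightness_contrast(
        frames,
        brightness=fn.random.uniform(range=(0.8, 1.2)),
        contrast=fn.random.uniform(range=(0.8, 1.2)),
    )
    return frames

pipe = video_pipeline(filenames=["session_01.mp4"])
pipe.build()
(clips,) = pipe.run()  # a batch of augmented clips, resident in GPU memory
```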

Using DALI, Lightning Pose increases training throughput for video data and keeps the GPU fully utilized, maintaining the performance of the whole solution.

DALI can also be combined with additional data loaders working in parallel. The International Brain Laboratory, a consortium of 16 different neuroscience labs, is currently integrating DALI loaders to predict poses in 30,000 neuroscience experiments.

The benefit of open-source cooperation

The research is a great example of value created by the cooperation of the open-source community. DALI and Lightning Pose, both open-source projects, are highly responsive to community feedback and inquiries on GitHub. The collaboration between these projects began in mid-2021 when Dan Biderman, a community member, started evaluating DALI technology. Dan’s proactive engagement and the DALI team’s swift responses fostered a productive dialogue, which led to its integration into Lightning Pose.

Download and try DALI and Lightning Pose; you can reach out to contacts for both directly through their GitHub pages.

Read the study, Improved animal pose estimation through semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools.

Categories
Misc

Reborn, Remastered and Remixed: ‘Portal: Prelude RTX’ Rejuvenates Legendary Gaming Mod

The “Portal: Prelude RTX” gaming mod — a remastering of the popular unofficial “Portal” prequel — comes with full ray tracing, DLSS 3 and RTX IO technology for cutting-edge, AI-powered graphics that rejuvenate the legendary mod for gamers, creators, developers and others to experience it anew.

Categories
Misc

New Video: Visualizing Census Data with RAPIDS cuDF and Plotly Dash

A US map showing different colors representing data visualization.

Gathering business insights can be a pain, especially when you’re dealing with countless data points. 

It’s no secret that GPUs can be a time-saver for data scientists. Rather than wait for a single query to run, GPUs help speed up the process and get you the insights you need quickly.

In this video, Allan Enemark, RAPIDS data visualization lead, uses a US Census dataset with over 300 million data points to demo running queries uninterrupted during the analysis process when using RAPIDS cuDF and Plotly Dash.

Key takeaways

  • Using cuDF over pandas for millions of data points results in significant performance benefits, with each query taking less than 1 second to run.
  • There are several advantages to using integrated accelerated visualization frameworks, such as faster analysis iterations.
  • Replacing CPU-based libraries with the pandas-like RAPIDS GPU-accelerated libraries (such as cuDF) helps data scientists move swiftly through the EDA process as data sizes grow to between 2 and 10 GB.
  • Visualization compute and render times are brought down to interactive sub-second speeds, unblocking the insight discovery process.

Video 1. Visualizing Census Data with RAPIDS cuDF and Plotly Dash

Summary

Swapping pandas for a RAPIDS library like cuDF can help speed up data analytics workflows, making the analysis process more effective and enjoyable. Additionally, the RAPIDS libraries make it easy to chart all kinds of data, like time series, geospatial, and graphs, using simple Python code.
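For a sense of how small the change can be, here is a minimal sketch of the pandas-to-cuDF swap on a GPU machine. The file name and column names are placeholders, not the census dataset used in the video.

```python
# Minimal sketch of the pandas-to-cuDF swap; file and column names are placeholders.
import cudf

df = cudf.read_parquet("census_points.parquet")  # loads directly into GPU memory
by_state = (
    df.groupby("state")["population"]
      .sum()
      .sort_values(ascending=False)
)
print(by_state.head(10).to_pandas())  # move a small result back to the CPU for display
```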

To learn more about speeding up your traditional data science workflows with GPUs, visit these resources: 
