
Upgrading Multi-GPU Interconnectivity with the Third-Generation NVIDIA NVSwitch

Increasing demands in AI and high-performance computing (HPC) are driving a need for faster, more scalable interconnects with high-speed communication between every GPU.

The third-generation NVIDIA NVSwitch is designed to satisfy this communication need. This latest NVSwitch and the H100 Tensor Core GPU use the fourth-generation NVLink, the newest high-speed, point-to-point interconnect by NVIDIA.

The third-generation NVIDIA NVSwitch is designed to provide connectivity within a node or to GPUs external to the node for the NVLink Switch System. It also incorporates hardware acceleration for collective operations with multicast and NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) in-network reductions.

NVIDIA NVSwitch is also a critical enabler of the NVLink Switch networking appliance, which enables the creation of clusters with up to 256 connected NVIDIA H100 Tensor Core GPUs and 57.6 TB/s of all-to-all bandwidth. The appliance delivers 9x more bisection bandwidth than was possible with HDR InfiniBand on NVIDIA Ampere Architecture GPUs.

High bandwidth and GPU-compatible operation

The performance needs of AI and HPC workloads continue to grow rapidly and require scaling to multi-node, multi-GPU systems.

Delivering excellent performance at scale requires high-bandwidth communication between every GPU, and the NVIDIA NVLink specification is designed for synergistic operation with NVIDIA GPUs to enable the required performance and scalability.

For instance, the thread-block execution structure of NVIDIA GPUs efficiently feeds the parallelized NVLink architecture. NVLink-Port interfaces have also been designed to match the data exchange semantics of GPU L2 caches as closely as possible.

Faster than PCIe

A key benefit of NVLink is that it offers substantially greater bandwidth than PCIe. Fourth-generation NVLink is capable of 100 Gbps per lane, more than tripling the 32 Gbps bandwidth of PCIe Gen5. Multiple NVLinks can be combined to provide even higher aggregate lane counts, yielding higher throughput.

Lower overhead than traditional networks

NVLink has been designed specifically as a high-speed, point-to-point link to interconnect GPUs, yielding lower overhead than would be present in traditional networks.

This enables many of the complex networking features found in traditional networks—such as end-to-end retry, adaptive routing, and packet reordering—to be traded off for increased port counts.

The greater simplicity of the network interface allows application-, presentation-, and session-layer functionality to be embedded directly into CUDA itself, further reducing communication overhead.

NVLink generations

First introduced with the NVIDIA P100 GPU, NVLink has continued to advance in lockstep with NVIDIA GPU architectures, with each new architecture accompanied by a new generation of NVLink.

Link count, bandwidth per link and signaling technology for each first-, second-, third-, and fourth-generation NVLink.
Figure 1. NVLink generations with the evolution in-step with GPUs

Fourth-generation NVLink provides 900 GB/s of bidirectional bandwidth per GPU—1.5x greater than the prior generation and more than 5.6x higher than first-generation NVLink.
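As a back-of-envelope check, the 900 GB/s figure follows directly from the lane rate. The link count (18 per GPU) and lane count (two per direction per link) below are the published H100 configuration, assumed here purely for illustration:

```python
# Sanity check of the fourth-generation NVLink per-GPU bandwidth figure.
LANE_GBPS = 100        # fourth-generation NVLink: 100 Gbps per lane
LANES_PER_DIR = 2      # lanes (differential pairs) per direction, per link
LINKS_PER_GPU = 18     # NVLink links on an H100 GPU (assumed configuration)

per_link_dir_gbs = LANE_GBPS * LANES_PER_DIR / 8           # 25 GB/s each way
per_gpu_bidir_gbs = 2 * per_link_dir_gbs * LINKS_PER_GPU   # both directions
print(per_gpu_bidir_gbs)  # 900.0 (GB/s)
```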

NVLink-enabled server generations

NVIDIA NVSwitch was first introduced with the NVIDIA V100 Tensor Core GPU and second-generation NVLink, enabling high-bandwidth, any-to-any connectivity between all GPUs in a server.

The NVIDIA A100 Tensor Core GPU introduced third-generation NVLink and second-generation NVSwitch, doubling both per-GPU bandwidth as well as reduction bandwidth.

Four diagrams show all-to-all connectivity for DGX-1 (P100), DGX-2 (V100), DGX A100 (A100), and DGX H100 (H100) servers with NVLink.
Figure 2. NVLink all-to-all connectivity across DGX server generations

With fourth-generation NVLink and third-generation NVSwitch, a system with eight NVIDIA H100 Tensor Core GPUs features 3.6 TB/s of bisection bandwidth and 450 GB/s of bandwidth for reduction operations. These are 1.5x and 3x increases compared to the prior generation.

In addition, with fourth-generation NVLink and third-generation NVSwitch as well as the external NVIDIA NVLink Switch, multi-GPU communication across multiple servers at NVLink speeds is now possible.

The largest and fastest switch chip to date 

Third-generation NVSwitch is the largest NVSwitch to date. It is built using the TSMC 4N process customized for NVIDIA. The die incorporates 25.1 billion transistors—more transistors than the NVIDIA V100 Tensor Core GPU—in an area of 294 mm2. The package dimensions are 50 mm x 50 mm with a total of 2645 solder balls.

NVSwitch die image showing crossbar in the center, port logic blocks (including SHARP accelerators) on each side of the cross bar, and two blocks of 32 PHY lanes on each of the top and bottom for a total of 128 PHY lanes.
Figure 3. Third-generation NVSwitch chip characteristics include it being the largest NVSwitch with the highest bandwidth ever and 400 GFlops of FP32 SHARP

NVLink network support

Third-generation NVSwitch is a key enabler of the NVLink Switch System, which enables connectivity between GPUs across nodes at NVLink speeds.

It incorporates physical (PHY) electrical interfaces that are compatible with 400 Gbps Ethernet and InfiniBand connectivity. The included management controller now provides support for attached Octal Small Formfactor Pluggable (OSFP) modules with four NVLinks per cage. With custom firmware, active cables can be supported.

Additional forward error correction (FEC) modes have also been added to enhance NVLink Network performance and reliability.

A security processor has also been added to protect data and chip configuration from attacks. The chip provides partitioning features that can isolate subsets of ports into separate NVLink Networks. Expanded telemetry features also enable InfiniBand-style monitoring.

Double the bandwidth

Third-generation NVSwitch is our highest-bandwidth NVSwitch yet.

With 100 Gbps of bandwidth per differential pair using 50 Gbaud PAM4 signaling, third-generation NVSwitch provides 3.2 TB/s of full-duplex bandwidth across 64 NVLink ports (x2 per NVLink). It delivers more bandwidth in a system while also requiring fewer NVSwitch chips compared to the prior generation. All ports on third-generation NVSwitch are NVLink Network–capable.
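The 3.2 TB/s figure is consistent with the lane counts given above (64 ports, two 100 Gbps lanes per port, matching the 128 PHY lanes shown in Figure 3). A quick sketch of the arithmetic:

```python
# Rough reconstruction of the third-generation NVSwitch bandwidth figure.
PORTS = 64             # NVLink ports on third-generation NVSwitch
LANES_PER_PORT = 2     # lanes per port (x2 per NVLink): 128 PHY lanes total
LANE_GBPS = 100        # 50 Gbaud PAM4 -> 100 Gbps per differential pair

per_dir_gbs = PORTS * LANES_PER_PORT * LANE_GBPS / 8   # GB/s in one direction
full_duplex_tbs = 2 * per_dir_gbs / 1000               # both directions, TB/s
print(full_duplex_tbs)  # 3.2
```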

SHARP collectives and multicast support

Third-generation NVSwitch includes a host of new hardware blocks for SHARP acceleration:

Block diagram showing management, port logic, PCIe I/O, crossbar, and NVLinks.
Figure 4. Third-generation NVSwitch block diagram
  • A SHARP controller
  • SHARP arithmetic logic units (ALUs) highly leveraged from those in the NVIDIA Hopper Architecture
  • Embedded SRAM to support the SHARP calculations

The embedded ALUs offer up to 400 GFLOPS of FP32 throughput and have been added to perform reduction operations directly in NVSwitch, rather than on the GPUs in the system.

These ALUs support a wide variety of operators, such as logical, min/max, and add. They also support data formats such as signed/unsigned integers, FP16, FP32, FP64, and BF16.

Third-generation NVSwitch also includes a SHARP controller that can manage up to 128 SHARP groups in parallel. The crossbar bandwidth in the chip has been increased to carry additional SHARP-related exchanges.

All-reduce operation compatibility

A key use case for NVIDIA SHARP is for all-reduce operations that are common in AI training. When training networks using multiple GPUs, batches are split into smaller subbatches, which are then assigned to each individual GPU.

Each GPU processes its individual subbatch through the network parameters, yielding possible changes to the parameters, also known as local gradients. These local gradients are combined and reconciled to produce global gradients, which each GPU applies to its parameter tables. This averaging process is also known as an all-reduce operation.
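The averaging step can be sketched in a few lines of plain Python (a toy stand-in for the real distributed operation, not NCCL):

```python
# Toy all-reduce: each "GPU" holds a local gradient computed on its subbatch;
# the global gradient is the element-wise mean across all workers.
local_grads = [
    [1.0, 2.0, 3.0],   # worker 0
    [3.0, 2.0, 1.0],   # worker 1
    [2.0, 2.0, 2.0],   # worker 2
]
n_workers = len(local_grads)
global_grad = [sum(g[i] for g in local_grads) / n_workers
               for i in range(len(local_grads[0]))]
# Every worker then applies the same global gradient to its parameter copy.
print(global_grad)  # [2.0, 2.0, 2.0]
```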

On the left is a basic training flow, and on the right, a multi-GPU training flow using NCCL AllReduce.
Figure 5. NCCL AllReduce in AI training with critical communication-intensive operation

NVIDIA Magnum IO is the architecture for data center IO to accelerate multi-GPU and multi-node communications. It enables HPC, AI, and scientific applications to scale performance on new large GPU clusters scaled using NVLink and NVSwitch.

Magnum IO includes the NVIDIA Collective Communication Library (NCCL), which implements a wealth of multi-GPU and multi-node collective primitives, including all-reduce.

NCCL AllReduce takes the local gradients as input, partitions them into subsets, and assigns each subset to a single GPU. That GPU then performs the reconciliation for its subset, such as summing the corresponding local gradient values from all GPUs.

Following this process, a global set of gradients is produced and then distributed to all other GPUs.

NCCL AllReduce summing gradients across GPUs by exchanging partial local gradients, reducing (sum) partials and broadcasting reduced partials.
Figure 6. Traditional all-reduce calculation with data-exchange and parallel calculation

These processes are highly communication-intensive and the associated communication overhead can substantially lengthen the overall time to train.

With the NVIDIA A100 Tensor Core GPU, third-generation NVLink, and second-generation NVSwitch, sending and receiving partials yields 2N reads (where N is the number of GPUs), and broadcasting the results yields 2N writes: 2N reads and 2N writes at each GPU interface, or 4N operations in total.

A100 and H100 + NVLink SHARP operations required in the read-and-reduce step and in the broadcast-result step, with a traffic summary at each GPU.
Figure 7. NVLink SHARP acceleration

The SHARP engines reside inside third-generation NVSwitch. Instead of distributing the data to each GPU and having the GPUs perform the calculations, the GPUs send their data into the third-generation NVSwitch chips, which perform the calculations and send the results back. This yields a total of 2N+2 operations, approximately halving the number of read/write operations needed to perform the all-reduce calculation.
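The savings can be tabulated with a small script (the helper names are ours; the operation counts are those given above):

```python
# Compare GPU-interface operation counts for an N-GPU all-reduce.
def allreduce_ops_gpu_based(n):
    # Exchange partials (2N reads) plus broadcast results (2N writes).
    return 4 * n

def allreduce_ops_sharp(n):
    # GPUs push data into NVSwitch once and read the result back: 2N + 2.
    return 2 * n + 2

for n in (4, 8, 16):
    print(n, allreduce_ops_gpu_based(n), allreduce_ops_sharp(n))
```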

Boosting performance for large-scale models

With the NVLink Switch System providing 4.5x more bandwidth than InfiniBand, large-scale model training becomes more practical.

For example, when training a recommendation engine with 14 TB embedding tables, we expect a significant performance uplift for H100 using the NVLink Switch System compared to H100 using InfiniBand.

In an example recommender system with 14 TB embedding tables, H100 with NVLink Switch System provides a significant performance boost over H100 with InfiniBand.
Figure 8. NVLink Switch System features 4.5x more bandwidth than maximum InfiniBand

NVLink Network

In prior generations of NVLink, each server had its own local address space, used by GPUs within the server when communicating with each other over NVLink. With NVLink Network, each server has a separate network address space, used when GPUs send data across the network, providing isolation and improved security when sharing data. This capability leverages functionality built into the latest NVIDIA Hopper GPU architecture.

While NVLink performs connection setup during the system boot process, the NVLink Network connection setup is performed through a runtime API call by software. This enables the network to be reconfigured on the fly as different servers come online and as users enter and exit.

Diagram showing source and destination GPUs communicating through NVLink Switch System and a table summarizing the differences between NVLink and NVLink Switch System.
Figure 9. NVLink Switch System changes compared to NVLink

Table 1 shows how traditional networking concepts map to their counterparts in NVLink Network.

Concept | Traditional Example | NVLink Network
Physical Layer | 400G electrical/optical media | Custom-FW OSFP
Data Link Layer | Ethernet | NVLink custom on-chip HW and FW
Network Layer | IP | New NVLink Network addressing and management protocols
Transport Layer | TCP | NVLink custom on-chip HW and FW
Session Layer | Sockets | SHARP groups; CUDA export of network addresses of data structures
Presentation Layer | TLS/SSL | Library abstractions (e.g., NCCL, NVSHMEM)
Application Layer | HTTP/FTP | AI frameworks or user apps
NIC | PCIe NIC (card or chip) | Functions embedded in GPU and NVSwitch
RDMA Off-Load | NIC off-load engine | GPU-internal copy engine
Collectives Off-Load | NIC/switch off-load engine | NVSwitch-internal SHARP engines
Security Off-Load | NIC security features | GPU-internal encryption and "TLB" firewalls
Media Control | NIC cable adaptation | NVSwitch-internal OSFP-cable controllers
Table 1. Traditional networking concepts mapped to their counterparts with the NVLink Switch System

DGX H100

NVIDIA DGX H100 is the latest iteration of the DGX family of systems based on the latest NVIDIA H100 Tensor Core GPU and incorporates:

  • 8x NVIDIA H100 Tensor Core GPUs with 640 GB of aggregate GPU memory
  • 4x third-generation NVIDIA NVSwitch chips
  • 18x NVLink Network OSFPs
  • 3.6 TB/s of full-duplex NVLink Network bandwidth provided by 72 NVLinks
  • 8x NVIDIA ConnectX-7 Ethernet/InfiniBand ports
  • 2x dual-port BlueField-3 DPUs
  • Dual Sapphire Rapids CPUs
  • Support for PCIe Gen 5 

Full bandwidth intra-server NVLink

Within a DGX H100, each of the eight H100 Tensor Core GPUs within the system is connected to all four third-generation NVSwitch chips. Traffic is sent across four different switch planes, enabling the aggregation of the links to achieve full all-to-all bandwidth between GPUs in the system.

Half-bandwidth NVLink Network

With NVLink Network, all eight NVIDIA H100 Tensor Core GPUs within a server can half-subscribe 18 NVLinks to H100 Tensor Core GPUs in other servers.

Alternatively, four H100 Tensor Core GPUs in a server can fully subscribe 18 NVLinks to H100 Tensor Core GPUs in other servers. This 2:1 taper is a trade-off made to balance bandwidth with server complexity and cost for this instantiation of the technology.

With SHARP, the bandwidth delivered is equivalent to a full-bandwidth AllReduce.

Multi-rail Ethernet

Within a server, all eight GPUs independently support RDMA from their own dedicated 400 Gbps NICs, and 800 GB/s of aggregate full-duplex bandwidth is possible to non-NVLink Network devices.

DGX H100 SuperPOD

DGX H100 is the building block of the DGX H100 SuperPOD.

  • Built from eight compute racks, each with four DGX H100 servers.
  • Features a total of 32 DGX H100 nodes, incorporating 256 NVIDIA H100 Tensor Core GPUs.
  • Delivers up to one exaflop of peak AI compute.

The NVLink Network provides 57.6 TB/s of bisection bandwidth spanning the entire 256 GPUs. Additionally, the ConnectX-7s across all 32 DGX systems and the associated InfiniBand switches provide 25.6 TB/s of full-duplex bandwidth for use within the pod or for scaling out to multiple SuperPODs.
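The 57.6 TB/s figure is consistent with the per-GPU NVLink bandwidth: cutting the 256-GPU fabric in half leaves 128 GPUs on each side, each able to drive 450 GB/s (one direction of its 900 GB/s) across the cut. A quick check:

```python
# Sanity check of the SuperPOD bisection-bandwidth figure.
GPUS = 256
PER_GPU_ONE_DIR_GBS = 450   # half of the 900 GB/s bidirectional NVLink bandwidth
bisection_tbs = (GPUS // 2) * PER_GPU_ONE_DIR_GBS / 1000
print(bisection_tbs)  # 57.6
```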

NVLink Switch

A key enabler of DGX H100 SuperPOD is the new NVLink Switch based on the third-generation NVSwitch chips. DGX H100 SuperPOD includes 18 NVLink Switches.

The NVLink Switch fits in a standard 1U 19-inch form factor, significantly leveraging InfiniBand switch design, and includes 32 OSFP cages. Each switch incorporates two third-generation NVSwitch chips, providing 128 fourth-generation NVLink ports for an aggregate 6.4 TB/s full-duplex bandwidth.

NVLink Switch supports out-of-band management communication and a range of cabling options such as passive copper. With custom firmware, active copper and optical OSFP cables are also supported.

Scale up with NVLink Network

An H100 SuperPOD with NVLink Network enables significant increases in bisection and reduction-operation bandwidth compared to a DGX A100 SuperPOD with 256 GPUs.

A single DGX H100 delivers 1.5x the bisection bandwidth and 3x the reduction-operation bandwidth of a single DGX A100. Those speedups grow to 9x and 4.5x, respectively, in 32-system configurations with 256 total GPUs.

DGX A100 256 POD connected by HDR InfiniBand compared with DGX H100 256 POD connected through NVLink, with a table comparing dense PFLOPS, bisection bandwidth, and reduction bandwidth.
Figure 10. DGX A100 POD and DGX H100 POD network topologies

Performance benefits for communication-intensive workloads

For workloads with high communication intensity, the performance benefits of NVLink Network can be significant. In HPC, workloads such as Lattice QCD and 8K 3D FFT see substantial benefits because multi-node scaling has been designed into the communication libraries within the HPC SDK and Magnum IO.

NVLink Network can also provide a significant boost when training large language models or recommenders with large embedding tables.

Three bar charts. From left to right, performance of H100 and H100 + NVLink baselined to A100 for HPC, AI Inference using Megatron Turing NLG 530B under latency constraints, and AI training of Mask R-CNN, GPT-3 (14B parameters), DLRM (14TB embeddings), GPT-3 (175B parameters), and MoE Switch-XXL 395B parameters.
Figure 11. NVLink Switch system benefits dependent on communication intensity

Delivering performance at scale

Delivering the highest performance for AI and HPC requires full-stack, data-center scale innovation. High-bandwidth, low-latency interconnect technologies are key enablers of performance at scale.

Third-generation NVSwitch delivers the next big leap for high-bandwidth, low-latency communication between GPUs both within a server, as well as bringing all-to-all GPU communication at full NVLink speed between server nodes.

Magnum IO works integrally with CUDA, HPC SDK, and nearly all deep learning frameworks. It enables AI software—such as large language models, recommender systems, and scientific applications like 3D FFT—to scale across multiple GPUs across multiple nodes using NVLink Switch System right out of the box.

For more information, see NVIDIA NVLink and NVSwitch.


Constructing CUDA Graphs with Dynamic Parameters

Ever since its introduction in CUDA 10, CUDA Graphs has been used in a variety of applications. A graph groups a set of CUDA kernels and other CUDA operations together and executes them with a specified dependency tree. It speeds up the workflow by combining the driver activities associated with CUDA kernel launches and CUDA API calls. It also enforces the dependencies with hardware accelerations, instead of relying solely on CUDA streams and events, when possible.

There are two main ways to construct a CUDA graph: explicit API calls and stream capture.

Construct a CUDA graph with explicit API calls

With this way of constructing a CUDA graph, nodes of the graph, formed by the CUDA kernel and CUDA memory operations, are added to the graph by calling the cudaGraphAdd*Node APIs, where * is replaced with the node type. Dependencies between the nodes are set explicitly with APIs.

The upside of constructing CUDA graphs with explicit APIs is that the cudaGraphAdd*Node APIs return node handles (cudaGraphNode_t) that can be used as references for future node updates. Kernel launch configurations and kernel function parameters of a kernel node in an instantiated graph, for example, can be updated with minimal cost with cudaGraphExecKernelNodeSetParams.

The downside is that in scenarios where CUDA graph is used to speed up existing code, constructing CUDA graphs with explicit API calls typically requires a significant number of code changes, especially changes regarding the control flow and function calling structure of the code.

Construct a CUDA graph with stream capture

With this way of constructing a CUDA graph, cudaStreamBeginCapture and cudaStreamEndCapture are placed before and after a code block. All device activities launched by the code block are recorded, captured, and grouped into a CUDA graph. The dependencies among the nodes are inferred from the CUDA stream or event API calls within the stream capture region.

The upside of constructing CUDA graphs with stream capture is that for existing code, fewer code changes are needed. The original code structure can be mostly untouched and graph construction is performed in an automatic way.

There are also downsides to this way of constructing CUDA graphs. Within the stream capture region, all kernel launch configurations and kernel function parameters, as well as the CUDA API call parameters are recorded by value. Whenever any of the configurations and parameters change, the captured and then instantiated graph becomes out-of-date.

Two solutions are provided in the Employing CUDA Graphs in a Dynamic Environment post:

  • The workflow is recaptured. A reinstantiation isn’t needed when the recaptured graph has the same node topology as the instantiated graph, and a whole-graph update can be performed with cudaGraphExecUpdate.
  • Cache CUDA graphs with the set of configurations and parameters as the key. Each set of configurations and parameters is associated with a distinct CUDA graph within the cache. When running the workflow, the set of configurations and parameters are first abstracted into a key. Then the corresponding graph, if it already exists, is found in the cache and launched.
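The caching strategy in the second bullet amounts to memoizing instantiated graphs on a key abstracted from the parameters. A plain-Python sketch of the pattern (instantiate_graph is a hypothetical stand-in for the capture-and-instantiate step, not a CUDA API):

```python
# Graph cache keyed by the abstracted configuration/parameter set.
graph_cache = {}

def instantiate_graph(params):
    # Hypothetical stand-in for stream capture + cudaGraphInstantiate.
    return ("instantiated-graph", tuple(sorted(params.items())))

def get_or_build_graph(params):
    key = tuple(sorted(params.items()))   # abstract params into a cache key
    if key not in graph_cache:
        graph_cache[key] = instantiate_graph(params)
    return graph_cache[key]

g1 = get_or_build_graph({"n": 256, "scale": 2})
g2 = get_or_build_graph({"scale": 2, "n": 256})
print(g1 is g2)  # True: identical parameters reuse the cached graph
```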

There are, however, workflows where neither solution works well. The recapture-then-update approach works well on paper, but in some cases the recapture and update themselves are expensive. There are also cases where it is simply not possible to associate each set of parameters with a CUDA graph. For example, cases with floating-point number parameters are difficult to cache as there are huge numbers of possible floating-point numbers.

CUDA Graphs constructed with explicit APIs are easy to update but the approach can be too cumbersome and is less flexible. CUDA Graphs can be constructed flexibly with stream capture but the resulting graphs are difficult and expensive to update.

Combined approach

In this post, I provide an approach of constructing CUDA graphs with both the explicit API and stream capture methods, thus achieving the upsides of both and avoiding the downsides of either.

As an example, in a workflow where three kernels are launched sequentially, the first two kernels have static launch configurations and parameters, but the last kernel has a dynamic launch configuration and parameters.

Use stream capture to record the launches of the first two kernels and call explicit APIs to add the last kernel node to the capturing graph. The node handle returned by the explicit APIs is then used to update the instantiated graph with the dynamic configurations and parameters every time before the graph is launched.

The following code example shows the idea:

cudaStream_t stream;
std::vector<cudaGraphNode_t> _node_list;
cudaGraphExec_t _graph_exec;

if (not using_graph) {
  // Kernel launch configurations (grid, block) are schematic placeholders
  first_static_kernel<<<grid, block, 0, stream>>>(static_parameters);
  second_static_kernel<<<grid, block, 0, stream>>>(static_parameters);
  dynamic_kernel<<<grid, block, 0, stream>>>(dynamic_parameters);
} else {
  if (capturing_graph) {
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    first_static_kernel<<<grid, block, 0, stream>>>(static_parameters);
    second_static_kernel<<<grid, block, 0, stream>>>(static_parameters);

    // Get the current stream capturing graph
    cudaGraph_t _capturing_graph;
    cudaStreamCaptureStatus _capture_status;
    const cudaGraphNode_t *_deps;
    size_t _dep_count;
    cudaStreamGetCaptureInfo_v2(stream, &_capture_status, nullptr,
                                &_capturing_graph, &_deps, &_dep_count);

    // Manually add a new kernel node
    cudaGraphNode_t new_node;
    cudaKernelNodeParams _dynamic_params_cuda;
    cudaGraphAddKernelNode(&new_node, _capturing_graph, _deps, _dep_count,
                           &_dynamic_params_cuda);

    // ... and store the new node for future references
    _node_list.push_back(new_node);

    // Update the stream dependencies
    cudaStreamUpdateCaptureDependencies(stream, &new_node, 1,
                                        cudaStreamSetCaptureDependencies);

    // End the capture and instantiate the graph
    cudaGraph_t _captured_graph;
    cudaStreamEndCapture(stream, &_captured_graph);
    cudaGraphInstantiate(&_graph_exec, _captured_graph, nullptr, nullptr, 0);
  } else if (updating_graph) {
    cudaKernelNodeParams _dynamic_params_updated_cuda;
    cudaGraphExecKernelNodeSetParams(_graph_exec, _node_list[0],
                                     &_dynamic_params_updated_cuda);
  }
}

In this example, cudaStreamGetCaptureInfo_v2 extracts the CUDA graph that is currently being recorded and captured into. A kernel node is added to this graph with the node handle (new_node) returned and stored, before cudaStreamUpdateCaptureDependencies is called to update the dependency tree of the current capturing stream. The last step is necessary to ensure that any other activities captured afterward have their dependencies set on these manually added nodes correctly.

With this approach, the same instantiated graph (cudaGraphExec_t object) can be reused directly with a lightweight cudaGraphExecKernelNodeSetParams call, even though the parameters are dynamic. The first image in this post shows this usage.

Furthermore, the capture and update code paths can be combined into one piece of code that lives next to the original code that launches the last two kernels. This inflicts a minimal number of code changes and does not break the original control flow and function call structure.

The new approach is shown in detail in the hummingtree/cuda-graph-with-dynamic-parameters standalone code example. cudaStreamGetCaptureInfo_v2 and cudaStreamUpdateCaptureDependencies are new CUDA runtime APIs introduced in CUDA 11.3.

Performance results

Using the hummingtree/cuda-graph-with-dynamic-parameters standalone code example, I measured the performance of running the same dynamic workflow that is bound by kernel launch overhead with three different approaches:

  • Running without CUDA graph acceleration
  • Running CUDA graph with the recapture-then-update approach
  • Running CUDA graph with the combined approach introduced in this post

Table 1 shows the results. The speedup from the approaches mentioned in this post strongly depends on the underlying workflow.

Approach | Time | Speedup over no graph
Combined | 433 ms | 1.63
Recapture-then-update | 580 ms | 1.22
No CUDA Graph | 706 ms | 1.00
Table 1. Performance results of running on an A100-40GB GPU and Intel Xeon Silver 4110 CPU at 2.10GHz
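The speedup column follows directly from the timings, taking the no-graph row as the baseline:

```python
# Recompute the speedups in Table 1 from the measured times.
times_ms = {"No CUDA Graph": 706, "Recapture-then-update": 580, "Combined": 433}
baseline = times_ms["No CUDA Graph"]
speedups = {name: round(baseline / t, 2) for name, t in times_ms.items()}
print(speedups)  # {'No CUDA Graph': 1.0, 'Recapture-then-update': 1.22, 'Combined': 1.63}
```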

Conclusion

In this post, I introduced an approach to constructing CUDA graphs that combines both the explicit API and stream capture methods. It provides a way to reuse instantiated graphs for workflows with dynamic parameters at minimal cost.

In addition to the CUDA technical posts mentioned earlier, the CUDA Graph section of the CUDA Programming Guide provides a comprehensive introduction to CUDA Graphs and its usages. For useful tips on employing CUDA Graphs in various applications, see the Nearly Effortless CUDA Graphs GTC session.


UVQ: Measuring YouTube’s Perceptual Video Quality

Online video sharing platforms, like YouTube, need to understand perceptual video quality (i.e., a user’s subjective perception of video quality) in order to better optimize and improve user experience. Video quality assessment (VQA) attempts to build a bridge between video signals and perceptual quality by using objective mathematical models to approximate the subjective opinions of users. Traditional video quality metrics, like peak signal-to-noise ratio (PSNR) and Video Multi-Method Assessment Fusion (VMAF), are reference-based and focus on the relative difference between the target and reference videos. Such metrics, which work best on professionally generated content (e.g., movies), assume the reference video is of pristine quality and that one can induce the target video’s absolute quality from the relative difference.

However, the majority of the videos that are uploaded on YouTube are user-generated content (UGC), which bring new challenges due to their remarkably high variability in video content and original quality. Most UGC uploads are non-pristine and the same amount of relative difference could imply very different perceptual quality impacts. For example, people tend to be less sensitive to the distortions of poor quality uploads than of high quality uploads. Thus, reference-based quality scores become inaccurate and inconsistent when used for UGC cases. Additionally, despite the high volume of UGC, there are currently limited UGC video quality assessment (UGC-VQA) datasets with quality labels. Existing UGC-VQA datasets are either small in size (e.g., LIVE-Qualcomm has 208 samples captured from 54 unique scenes), compared with datasets with millions of samples for classification and recognition (e.g., ImageNet and YouTube-8M), or don’t have enough content variability (sampling without considering content information, like LIVE-VQC and KoNViD-1k).

In “Rich Features for Perceptual Quality Assessment of UGC Videos“, published at CVPR 2021, we describe how we attempt to solve the UGC quality assessment problem by building a Universal Video Quality (UVQ) model that resembles a subjective quality assessment. The UVQ model uses subnetworks to analyze UGC quality from high-level semantic information to low-level pixel distortions, and provides a reliable quality score with rationale (leveraging comprehensive and interpretable quality labels). Moreover, to advance UGC-VQA and compression research, we enhance the open-sourced YouTube-UGC dataset, which contains 1.5K representative UGC samples from millions of UGC videos (distributed under the Creative Commons license) on YouTube. The updated dataset contains ground-truth labels for both original videos and corresponding transcoded versions, enabling us to better understand the relationship between video content and its perceptual quality.

Subjective Video Quality Assessment
To understand perceptual video quality, we leverage an internal crowd-sourcing platform to collect mean opinion scores (MOS) with a scale of 1–5, where 1 is the lowest quality and 5 is the highest quality, for no-reference use cases. We collect ground-truth labels from the YouTube-UGC dataset and categorize UGC factors that affect quality perception into three high-level categories: (1) content, (2) distortions, and (3) compression. For example, a video with no meaningful content won’t receive a high quality MOS. Also, distortions introduced during the video production phase and video compression artifacts introduced by third-party platforms, e.g., transcoding or transmission, will degrade the overall quality.

Left: A video with no meaningful content won’t receive a high quality MOS (MOS = 2.052). Right: A video displaying intense sports shows a higher MOS (MOS = 4.457).
Left: A blurry gaming video gets a very low quality MOS (MOS = 1.242). Right: A video with professional rendering (high contrast and sharp edges, usually introduced in the video production phase) shows a high quality MOS (MOS = 4.522).
Left: A heavily compressed video receives a low quality MOS (MOS = 2.372). Right: A video without compression artifacts shows a high quality MOS (MOS = 4.646).

We demonstrate that the left gaming video in the second row of the figure above has the lowest MOS (1.2), even lower than the video with no meaningful content. A possible explanation is that viewers may have higher video quality expectations for videos that have a clear narrative structure, like gaming videos, and the blur artifacts significantly reduce the perceptual quality of the video.

UVQ Model Framework
A common method for evaluating video quality is to design sophisticated features, and then map these features to a MOS. However, designing useful handcrafted features is difficult and time-consuming, even for domain experts. Also, the most useful existing handcrafted features were summarized from limited samples, which may not perform well on broader UGC cases. In contrast, machine learning is becoming more prominent in UGC-VQA because it can automatically learn features from large-scale samples.

A straightforward approach is to train a model from scratch on existing UGC quality datasets. However, this may not be feasible as there are limited quality UGC datasets. To overcome this limitation, we apply a self-supervised learning step to the UVQ model during training. This self-supervised step enables us to learn comprehensive quality-related features, without ground-truth MOS, from millions of raw videos.

Following the quality-related categories summarized from the subjective VQA, we develop the UVQ model with four novel subnetworks. The first three subnetworks, which we call ContentNet, DistortionNet and CompressionNet, are used to extract quality features (i.e., content, distortion and compression), and the fourth subnetwork, called AggregationNet, maps the extracted features to generate a single quality score. ContentNet is trained in a supervised learning fashion with UGC-specific content labels that are generated by the YouTube-8M model. DistortionNet is trained to detect common distortions, e.g., Gaussian blur and white noise of the original frame. CompressionNet focuses on video compression artifacts, whose training data are videos compressed with different bitrates. CompressionNet is trained using two compressed variants of the same content that are fed into the model to predict corresponding compression levels (with a higher score for more noticeable compression artifacts), with the implicit assumption that the higher bitrate version has a lower compression level.

The ContentNet, DistortionNet and CompressionNet subnetworks are trained on large-scale samples without ground-truth quality scores. Since video resolution is also an important quality factor, the resolution-sensitive subnetworks (CompressionNet and DistortionNet) are patch-based (i.e., each input frame is divided into multiple disjointed patches that are processed separately), which makes it possible to capture all detail on native resolution without downscaling. The three subnetworks extract quality features that are then concatenated by the fourth subnetwork, AggregationNet, to predict quality scores with domain ground-truth MOS from YouTube-UGC.
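The patch-based processing can be sketched in a few lines of plain Python. This is only an illustration of the idea, not the actual UVQ implementation: the helper name, patch size, and list-of-lists frame representation are all hypothetical.

```python
def split_into_patches(frame, patch_h, patch_w):
    """Split a frame (2D list of pixels) into disjoint patches.

    Patches tile the frame without overlap, so native-resolution
    detail is preserved with no downscaling. Edge regions smaller
    than a full patch are simply dropped in this sketch.
    """
    rows, cols = len(frame), len(frame[0])
    patches = []
    for top in range(0, rows - patch_h + 1, patch_h):
        for left in range(0, cols - patch_w + 1, patch_w):
            patch = [row[left:left + patch_w] for row in frame[top:top + patch_h]]
            patches.append(patch)
    return patches

# A toy 4x4 "frame" split into four disjoint 2x2 patches:
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = split_into_patches(frame, 2, 2)
```

Each patch is then processed separately by the resolution-sensitive subnetworks, which is why they can capture fine detail that a downscaled full frame would lose.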

The UVQ training framework.

Analyzing Video Quality with UVQ
After building the UVQ model, we use it to analyze the video quality of samples pulled from YouTube-UGC and demonstrate that its subnetworks can provide a single quality score along with high-level quality indicators that can help us understand quality issues. For example, DistortionNet detects multiple visual artifacts, e.g., jitter and lens blur, for the middle video below, and CompressionNet detects that the bottom video has been heavily compressed.

ContentNet assigns content labels with corresponding probabilities in parentheses, i.e., car (0.58), vehicle (0.42), sports car (0.32), motorsports (0.18), racing (0.11).
DistortionNet detects and categorizes multiple visual distortions with corresponding probabilities in parentheses, i.e., jitter (0.112), color quantization (0.111), lens blur (0.108), denoise (0.107).
CompressionNet detects a high compression level of 0.892 for the video above.

Additionally, UVQ can provide patch-based feedback to locate quality issues. Below, UVQ reports that the quality of the first patch (patch at time t = 1) is good with a low compression level. However, the model identifies heavy compression artifacts in the next patch (patch at time t = 2).

Left: Patch at time t = 1, compression level = 0.000. Right: Patch at time t = 2, compression level = 0.904. UVQ detects a sudden quality degradation (high compression level) for a local patch.

In practice, UVQ can generate a video diagnostic report that includes a content description (e.g., strategy video game), distortion analysis (e.g., the video is blurry or pixelated) and compression level (e.g., low or high compression). Below, UVQ reports that the content quality, looking at individual features, is good, but the compression and distortion quality is low. When combining all three features, the overall quality is medium-low. We see that these findings are close to the rationale summarized by internal user experts, demonstrating that UVQ can reason through quality assessments, while providing a single quality score.

UVQ diagnostic report. ContentNet (CT): Video game, strategy video game, World of Warcraft, etc. DistortionNet (DT): multiplicative noise, Gaussian blur, color saturation, pixelate, etc. CompressionNet (CP): 0.559 (medium-high compression). Predicted quality score in [1, 5]: (CT, DT, CP) = (3.901, 3.216, 3.151), (CT+DT+CP) = 3.149 (medium-low quality).

Conclusion
We present the UVQ model, which generates a report with quality scores and insights that can be used to interpret UGC video perceptual quality. UVQ learns comprehensive quality-related features from millions of UGC videos and provides a consistent view of quality interpretation for both no-reference and reference cases. To learn more, read our paper or visit our website to see YT-UGC videos and their subjective quality data. We also hope that the enhanced YouTube-UGC dataset enables more research in this space.

Acknowledgements
This work was possible through a collaboration spanning several Google teams. Key contributors include: Balu Adsumilli, Neil Birkbeck, Joong Gon Yim from YouTube and Junjie Ke, Hossein Talebi, Peyman Milanfar from Google Research. Thanks to Ross Wolf, Jayaprasanna Jayaraman, Carena Church, and Jessie Lin for their contributions.

Categories
Misc

Learn How Leading Companies Are Building AI Centers of Excellence, at NVIDIA GTC

AI Centers of Excellence are organizational units dedicated to implementing a company-wide AI vision. They help identify business use cases, create an implementation roadmap, accelerate adoption, assess impact and more. NVIDIA GTC, a global conference on AI and the metaverse, brings together the world’s top business and technology leaders who’ve embraced artificial intelligence to transform…

The post Learn How Leading Companies Are Building AI Centers of Excellence, at NVIDIA GTC appeared first on NVIDIA Blog.

Categories
Misc

Explore the Future of Robotics at GTC 2022

Discover the latest innovations in AI and robotics, and hear world-renowned roboticist Dr. Henrik Christensen talk about the future of robotics.

Categories
Misc

Predict, Detect, Mitigate: AI for Climate Science Takes the Stage at NVIDIA GTC

Recent AI advances enable modeling of weather forecasting 4-5 magnitudes faster than traditional computing methods. The brightest leaders, researchers and developers in climate science, high performance computing and AI will discuss such technology breakthroughs — and how they can help foster a greener Earth — at NVIDIA GTC. The virtual conference, running Sept. 19-22, also…

The post Predict, Detect, Mitigate: AI for Climate Science Takes the Stage at NVIDIA GTC appeared first on NVIDIA Blog.

Categories
Misc

Shelter From the Storm: AI Helps Gauge Catastrophe Risks

Floods in Kentucky and wildfires in California are the kinds of disasters companies of all sorts are trying to address with AI. Tom Rikert, co-founder and CEO of San Francisco-based startup Masterful AI, is one of many experts helping them manage catastrophe risk. In the U.S. alone, the National Association of Insurance Commissioners estimates that…

The post Shelter From the Storm: AI Helps Gauge Catastrophe Risks appeared first on NVIDIA Blog.

Categories
Misc

3D Artists Reimagine, Remaster Iconic European Architecture This Week ‘In the NVIDIA Studio’

A triple threat steps In the NVIDIA Studio this week: a tantalizing trio of talented 3D artists who each reimagined and remastered classic European buildings with individualistic flair.

The post 3D Artists Reimagine, Remaster Iconic European Architecture This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

Categories
Misc

Leveraging AI Music with NVIDIA DGX-2

Language models such as the NVIDIA Megatron-LM and OpenAI GPT-2 and GPT-3 have been used to enhance human productivity and creativity. Specifically, these…

Language models such as the NVIDIA Megatron-LM and OpenAI GPT-2 and GPT-3 have been used to enhance human productivity and creativity. Specifically, these models have been used as powerful tools for writing, programming, and painting. The same architecture can be used for music composition.

Large datasets are required to use language models in these domains. For language generation, starting with 50 GB of uncompressed text files is not unusual. This implies the need for a lot of GPU compute to train the models effectively for rapid development, prototyping, and iteration.

This post provides an account of a series of experiments performed in the field of AI music using the NVIDIA DGX-2 platform. DGX-2 boosted progress significantly in both data preprocessing and training language models.

Datasets for AI music

There are two major classes of datasets for computational music. One approach involves training on music represented as pure audio (WAV or MP3 files). The second approach does not work with audio at all. Instead, anything that resembles sheet music is mapped to a token representation.

Usually, this requires tokens for which note starts (C, D, E, F, G), how much time passes (a quarter note or an eighth note, for example), and which note ends. In research and application, MIDI files have proven to be fruitful sources of musical material. The MIDI standard was designed to store music information electronically.

These experiments used several sets of MIDI files, including the JS Fake Chorales, Lakh MIDI, and MetaMIDI datasets.

Video 1. AI music composed using a GPT trained on the MetaMIDI Dataset

The MIDI format is a non-human-readable representation of music that must be mapped to a readable token representation before a causal language model can be trained on it. For this representation, we took inspiration from the mmmtrack encoding.

This encoding represents pieces of music as a hierarchy. A piece of music consists of different tracks for different instruments: drums, guitars, bass, and piano, for example. Each track consists of several bars (4, 8, or 16 bars, depending on the use case). And each bar holds a sequence of note-on, time-delta, and note-off events. Although this hierarchy can be considered a tree, it is possible to encode everything as a linear sequence, making it an ideal representation for decoder-only language models.

The example below is a four-part chorale in its piano roll representation. A chorale features four voices: soprano, alto, tenor, and bass. Soprano and alto are female voices, and tenor and bass are male voices. Usually, all four voices sing at the same time but with different, harmonic pitches. 

Figure 1 visualizes the voices with pitch color coding. The soprano is green, the alto is orange, the tenor is blue, and the bass is red. You can encode these musical events—which have both a time and a pitch dimension—to a sequence of tokens.

Graph visualization of music tokens generated with the Music GPT model. The music tokens are pitch color-coded.
Figure 1. A sample of generated music tokens visualized with pitch color coding

Following the mmmtrack encoding, the bass part would be mapped to the following token representation:

PIECE_START TRACK_START INST=BASS BAR_START NOTE_ON=61 TIME_DELTA=4 NOTE_OFF=61 NOTE_ON=59 TIME_DELTA=2 NOTE_OFF=59 NOTE_ON=58 TIME_DELTA=2 NOTE_OFF=58 NOTE_ON=56 TIME_DELTA=4 NOTE_OFF=56 NOTE_ON=54 TIME_DELTA=4 NOTE_OFF=54 BAR_END BAR_START NOTE_ON=59 TIME_DELTA=2 NOTE_OFF=59 NOTE_ON=58 TIME_DELTA=2 NOTE_OFF=58 NOTE_ON=56 TIME_DELTA=4 NOTE_OFF=56 NOTE_ON=58 TIME_DELTA=4 NOTE_OFF=58 NOTE_ON=59 TIME_DELTA=4 NOTE_OFF=59 BAR_END BAR_START NOTE_ON=58 TIME_DELTA=4 NOTE_OFF=58 NOTE_ON=59 TIME_DELTA=2 NOTE_OFF=59 NOTE_ON=61 TIME_DELTA=2 NOTE_OFF=61 NOTE_ON=63 TIME_DELTA=2 NOTE_OFF=63 NOTE_ON=52 TIME_DELTA=2 NOTE_OFF=52 NOTE_ON=54 TIME_DELTA=4 NOTE_OFF=54 BAR_END BAR_START NOTE_ON=47 TIME_DELTA=4 NOTE_OFF=47 NOTE_ON=49 TIME_DELTA=2 NOTE_OFF=49 NOTE_ON=51 TIME_DELTA=2 NOTE_OFF=51 NOTE_ON=52 TIME_DELTA=2 NOTE_OFF=52 NOTE_ON=54 TIME_DELTA=2 NOTE_OFF=54 NOTE_ON=56 TIME_DELTA=4 NOTE_OFF=56 BAR_END TRACK_END TRACK_START INST=TENOR …

With a little practice, humans can read and understand this representation. The representation starts with PIECE_START indicating the start of a piece of music. TRACK_START indicates the beginning and TRACK_END the end of a track (or instrument or voice). The INST=BASS token denotes that this track contains the bass voice. Other voices are represented the same way. BAR_START and BAR_END represent the beginning and the end of a bar, respectively. NOTE_ON=61 is the start of a note with pitch 61. 

On the piano, this would be the note C#5. TIME_DELTA=4 means that a duration of four sixteenth notes would elapse. That would be a quarter note. After that, the note would end, represented by NOTE_OFF=61. And so on and so forth. At this point, this notation would also allow for polyphony. Several tracks would sound notes at the same time, and each track could have parallel notes. This makes the encoding universal.
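As a rough sketch, note events could be serialized to this token format as follows. The `encode_track` helper and its `(pitch, duration)` event representation are illustrative, not the actual preprocessor; the sketch handles one track of strictly sequential notes, while the real encoding also wraps tracks in `PIECE_START` and supports overlapping notes via interleaved events.

```python
def encode_track(instrument, bars):
    """Encode one track as mmmtrack-style tokens.

    `bars` is a list of bars; each bar is a list of (pitch, duration)
    note events, with duration counted in sixteenth notes.
    """
    tokens = ["TRACK_START", f"INST={instrument}"]
    for bar in bars:
        tokens.append("BAR_START")
        for pitch, duration in bar:
            tokens.append(f"NOTE_ON={pitch}")
            tokens.append(f"TIME_DELTA={duration}")
            tokens.append(f"NOTE_OFF={pitch}")
        tokens.append("BAR_END")
    tokens.append("TRACK_END")
    return tokens

# The first two notes of the bass bar from the example above:
tokens = encode_track("BASS", [[(61, 4), (59, 2)]])
```

Because the hierarchy of piece, track, bar, and event is flattened into one linear sequence, the output can be fed directly to a decoder-only language model.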

Each piece of music differs in its number of bars. Encoding an entire song could easily require a very long sequence length, making the training of the corresponding Transformer computationally expensive. These experiments encode most of the datasets with four bars per sample and a few with eight; experiments with 16 bars are underway. In addition, only music in 4/4 meter was used, which covers the better part of Western music. Other meters, such as 3/4 (waltz), can be the subject of future work.

Across this sequence of experiments, many MIDI datasets were mapped to the described token format, using the same preprocessor throughout. Once the preprocessor worked with small datasets, it immediately worked with larger ones.

The processing time depends on the number of MIDI files to be encoded, ranging from a few minutes to many hours. The longest preprocessing took 30 hours on DGX-2 running on all 96 CPUs in parallel. It is estimated that this would take about 10-14 days of processing on a state-of-the-art MacBook Pro.
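A preprocessing run of this kind can be parallelized across CPU cores with Python's standard library. The sketch below is illustrative: `encode_midi_file` is a placeholder for the real MIDI-to-token conversion, and on a DGX-2 the worker count would be set to 96.

```python
from multiprocessing import Pool
from pathlib import Path

def encode_midi_file(path):
    """Placeholder for the real MIDI-to-token conversion: here it only
    returns the name of the token file an encoder would write."""
    return Path(path).stem + ".tokens"

def preprocess(midi_paths, workers=4):
    """Encode a batch of MIDI files in parallel across CPU processes."""
    with Pool(processes=workers) as pool:
        return pool.map(encode_midi_file, midi_paths)
```

Because each MIDI file is encoded independently, the work is embarrassingly parallel and throughput scales almost linearly with the number of CPU cores.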

Graph of music datasets (MIDI files) used for training the Music GPT models in bar chart sorted from the largest datasets to the smallest
Figure 2. Music datasets used for training the GPT models

Encoding a dataset of MIDI files would yield a collection of token files. The size of those token files depends on the number of MIDI files and the number of bars. Consider some of the experiment datasets and their encoded dataset sizes:

  • JS Fake Chorales Dataset: 14 MB with four bars per sample
  • The Lakh MIDI Dataset: 72 GB, its Clean subset 19 GB with four bars per sample
  • The MetaMIDI Dataset: 130 GB with four bars and 230 GB with eight bars per sample

You can imagine that training on the 14 MB of JS Fake Chorales would take just a few hours. Training on the MetaMIDI Dataset with its 130 GB would take many days. Training for these experiments lasted between 10 and 15 days.

Model training

Many models were trained using the HuggingFace GPT-2 implementation. A few models were trained using the NVIDIA Megatron-LM in GPT-2 mode. 

Training with HuggingFace boiled down to uploading the dataset to the DGX-2 and then running a training script that contained all functionality, including the model and training parameters. The same script, with just a few changes here and there, was used for all datasets. It was just a matter of scale.

For Megatron-LM, the environment setup is as easy as pulling and running an NGC PyTorch Docker container, then getting to work immediately in a Jupyter notebook in the browser through an SSH tunnel into the DGX-2 machine.

Most of the experiments used the same GPT-2 architecture: six decoder blocks, eight attention heads, an embedding size of 512, and a sequence length of 2048. Although this is definitely not a large language model (LLM), which can have around 100 decoder blocks, subjective evaluation showed that this architecture works like a charm for AI music.
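For reference, this architecture maps to the HuggingFace GPT-2 implementation roughly as follows. This is a sketch, not the actual training script; in particular, the `vocab_size` value is an assumption, since it depends on the token vocabulary produced by the MIDI preprocessor.

```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=512,   # assumption: size of the MIDI token vocabulary
    n_positions=2048, # sequence length
    n_embd=512,       # embedding size
    n_layer=6,        # six decoder blocks
    n_head=8,         # eight attention heads
)
model = GPT2LMHeadModel(config)
```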

Using the NVIDIA DGX-2 really made a difference in rapid iteration. Datasets that would train for multiple days on a single GPU trained in just a few hours on DGX-2, and datasets that would train for months on a single GPU finished training in at most two weeks.

Training times for some of the datasets were as follows:

  • The Lakh MIDI Clean Dataset took 15 hours for 10 epochs and roughly 15K songs
  • The Lakh MIDI Dataset took 130 hours for 10 epochs and roughly 175K songs
  • The MetaMIDI Dataset took 290 hours for 9 epochs and roughly 400K songs

Note that the model for the JS Fake Chorales dataset was trained earlier and not on the DGX-2. Due to the dataset's very small size, a multi-GPU setup was not necessary; it could even be trained overnight on a MacBook Pro.

NVIDIA DGX-2

This section provides a closer look at the NVIDIA DGX-2 specifications. As mentioned above, the platform is very effective both for accelerated dataset preprocessing and for training language models. This section will be a delightfully technical one.

A picture of a DGX-2 station
Figure 3. DGX-2 station 

NVIDIA DGX-2 is a powerful system with 16 fully connected Tesla V100 32 GB GPUs using NVSwitch. It is capable of delivering 2.4 TB/sec of bisection bandwidth. DGX-2 has been designed for AI researchers and data scientists who need both performance and scalability. 

For transformer models, NVIDIA DGX-2 can deliver up to 517,227 tokens per second of throughput with mixed precision, making it especially powerful for this workload.

A table showing multi-GPU performance information for the NVIDIA DGX-2 station, specifically on throughput for Floating Point 32 / mixed precision varying number of GPUs and batch sizes.
Table 1. Multi-GPU performance table for DGX-2

Software framework: NVIDIA Megatron-LM

To get the most out of powerful compute, you need stable and optimized software. With a performance-optimized framework such as NVIDIA Megatron-LM, performance is scaled almost linearly as the GPT model sizes are scaled. For related information, see Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.

A baseline is established by training a model of 1.2 billion parameters on a single NVIDIA V100 32 GB GPU, which sustains 39 teraflops. This is 30% of the theoretical peak FLOPS for a single GPU as configured in a DGX-2H server, and is thus a strong baseline.

Scaling the model to 8.3 billion parameters on 512 GPUs with 8-way model parallelism achieved up to 15.1 petaflops sustained over the entire application. This is 76% scaling efficiency compared with the single-GPU case.

A plot with achieved petaflops on the y-axis and number of GPUs used on the x-axis to demonstrate near linear scaling performance with Megatron-LM up to thousands of GPUs as we scale model sizes up to 1 trillion parameters.
Figure 4. Scaling to thousands of GPUs with NVIDIA Megatron-LM, without losing performance

By fixing the sequence length (seq_len) to 4,096, modifying training configurations, and launching training runs with only a few iterations, it is possible to calculate the percentage of peak teraflops achieved in real application job runs.
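That calculation can be sketched in a few lines of Python using equation (3) from the Megatron-LM scaling paper referenced below. This is a sketch, not the profiling scripts actually used; the 130 TFLOP/s per-GPU peak is the mixed-precision value implied by the 39-teraflop, 30%-of-peak baseline above, and the other arguments would come from a profiled run.

```python
def model_flops_per_iteration(batch, seq_len, layers, hidden, vocab):
    """End-to-end FLOPs for one training iteration of a GPT model,
    per equation (3) of the Megatron-LM scaling paper."""
    return (96 * batch * seq_len * layers * hidden ** 2
            * (1 + seq_len / (6 * hidden) + vocab / (16 * layers * hidden)))

def percent_of_peak(flops_per_iter, iter_seconds, n_gpus, peak_tflops=130):
    """Achieved per-GPU throughput as a percentage of the per-GPU peak."""
    achieved_tflops = flops_per_iter / iter_seconds / n_gpus / 1e12
    return 100 * achieved_tflops / peak_tflops
```

Dividing the measured iteration time into the model FLOPs, then normalizing by GPU count and peak throughput, yields the utilization percentages reported in the table.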

After each run, both the nvidia-smi output and the resulting Nsight profile were analyzed. Different configurations were tested to obtain the highest possible teraflops utilization, as the table below illustrates:

Table showing Teraflops calculation using the third equation presented in Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
Table 2. Teraflops calculation using the third equation presented in Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM

The training configuration presented in the last row of the table delivered the highest utilization: 45.45% of peak teraflops.

Note that eight V100 32 GB GPUs were used instead of 16 to shorten the time it takes to run each profiling job. The nvidia-smi command was used to verify the training configuration that achieved 45.45% teraflops utilization, as illustrated below.

Training performance was interactively monitored through the use of nvidia-smi commands
Figure 5. Training performance was interactively monitored through the use of nvidia-smi commands

Summary

The AI music experiments presented here were performed using the NVIDIA DGX-2. We trained language models using datasets ranging from just a few megabytes in size to 230 GB. We used the HuggingFace GPT-2 implementation and showed that NVIDIA Megatron-LM is also a great alternative for experimentation.

NVIDIA DGX-2 made a significant difference in accelerating dataset preprocessing—mapping MIDI files to a token representation—and training models. This allowed for rapid experimentation. DGX-2 worked like a charm when it came to training the largest MIDI dataset available (MetaMIDI with 400K files).

Categories
Misc

DLI Courses: Enhance Your Skills with Hands-On Training at GTC

Select from 20 hands-on workshops, offered at GTC, available in multiple languages and time zones. Early bird pricing of just $99 ends Aug 29 (regular $500).
