Categories
Misc

Free Ray Tracing Gems II Chapter Covers Ray Tracing in Remedy’s Control

This chapter, written by Juha Sjöholm, Paula Jukarainen, and Tatu Aalto, presents how all ray tracing-based effects were implemented in Remedy Entertainment’s Control.

Next week, Ray Tracing Gems II will finally be available in its entirety as a free download, or for purchase as a physical release from Apress or Amazon. Since the start of July, we’ve been providing early access to a new chapter every week. Today’s chapter, by Juha Sjöholm, Paula Jukarainen, and Tatu Aalto, presents how all ray tracing-based effects were implemented in Remedy Entertainment’s Control. This includes opaque and transparent reflections, near-field indirect diffuse illumination, contact shadows, and the denoisers tailored for these effects.

You can download the full chapter for free here.

You can also learn more about Game of the Year winner Control here.

We’ve collaborated with our partners to make limited edition versions of the book, including custom covers that highlight real-time ray tracing in Fortnite, Control, and Watch Dogs: Legion.

To win a limited edition print copy of Ray Tracing Gems II, enter the giveaway contest here: https://developer.nvidia.com/ray-tracing-gems-ii

Categories
Misc

GPU Accelerating Node.js JavaScript for Visualization and Beyond

NVIDIA GTC21 was packed with great and engaging content, especially around RAPIDS, so it would be easy to miss our debut presentation “Using RAPIDS to Accelerate Node.js JavaScript for Visualization and Beyond.” Yep – we are bringing the power of GPU-accelerated data science to the JavaScript Node.js community with the Node-RAPIDS project. Node-RAPIDS is an … Continued

NVIDIA GTC21 was packed with great and engaging content, especially around RAPIDS, so it would be easy to miss our debut presentation “Using RAPIDS to Accelerate Node.js JavaScript for Visualization and Beyond.” Yep – we are bringing the power of GPU-accelerated data science to the JavaScript Node.js community with the Node-RAPIDS project.

Node-RAPIDS is an open-source technical preview of modular RAPIDS library bindings for Node.js, as well as complementary methods for enabling high-performance, browser-based visualizations.

Venn diagram showing intersections of Performance, Reach, and Usability, highlighting the intersection of all three.

What’s the problem with web viz?

Around a decade ago, a mini-renaissance in web-based data visualization showed the benefits of highly interactive, easy-to-share, and easy-to-use tools such as D3. While not as performant as C/C++ or Python frameworks, these tools took off because of JavaScript’s accessibility. It is no surprise that JavaScript often ranks as the most popular developer language, ahead of Python and Java, and there is now a full catalog of visualization and data tools.

Yet, this large JavaScript community of developers is impeded by the lack of first-class, accelerated data tools in their preferred language. Analysis is most effective when it sits as close as possible to its data source, science, and visualizations. Fully accessing GPU hardware with JavaScript (beyond WebGL limitations and hacks) requires being a polyglot, setting up complicated middleware plumbing, or using non-JavaScript frameworks like Plotly Dash. As a result, data engineers, data scientists, visualization specialists, and front-end developers are often siloed, even within organizations. This is detrimental because data visualization is the ideal medium of communication between these groups.

As for the RAPIDS Viz team, ever since our first proof of concept, we’ve wanted to build tools that can seamlessly interact with hundreds of millions of data points in real time through the browser – and we finally have a way.

Why Node.js?

If you are not familiar with Node.js, it is an open-source, cross-platform runtime environment, built on the C++-based V8 engine, that executes JavaScript code outside of a web browser. Over one million Node.js downloads occur per day. Node Package Manager (npm) is the default JavaScript package manager, and npm is now owned by GitHub, a Microsoft subsidiary. Node.js is used in the backend of online marketplaces like eBay and AliExpress, and by high-traffic websites such as Netflix, PayPal, and Groupon. Clearly, it is a powerful framework.

Figure 1: XKCD – Node.js is a Universal Connector.

Node.js is the connector that gives JavaScript direct access to hardware, which results in a streamlined API and the ability to use NVIDIA CUDA ⚡. By creating node-rapids bindings, we give a massive developer community the ability to use GPU acceleration without needing to learn a new language or work in a new environment. We also give that same community access to a high-performance data science platform: RAPIDS!

Our basic notebook shows a snippet of node-RAPIDS in action, including a 6x speedup for a small regex example.

Node-RAPIDS: designed as building blocks

 Notional diagram of node-rapids module structure, showing components for memory management (cuda, rmm), data science  (cudf, cuml, cugraph, cuspatial), graphics, streaming, and front end.
Figure 2: Node-RAPIDS module overview.

Like many Node projects, Node-RAPIDS is designed to be modular. Our aim is not to build turnkey web applications, but to create an inventory of functionality that enables or accelerates a wide variety of use cases and pipelines. Figure 2 shows an overview of the current and planned Node-RAPIDS modules, grouped into general categories. A Node-RAPIDS application can use as many or as few of the modules as needed.

To make starting out less daunting, we are also building a catalog of demos that can serve as templates for generalized applications. As we develop more bindings, we will create more demos to showcase their capabilities.

Notional architecture diagram showing a GPU-accelerated node.js application for crossfiltering, made possible using node-rapids.
Figure 3: Example of a Cross Filter App.

Figure 3 shows an idealized stack for a geospatial cross-filter dashboard application using the RAPIDS cuDF and cuSpatial libraries. We have a simple demo using Deck.gl that you can preview with our video and explore as demo code on GitHub.

Figure 4: Example of Streaming ETL Process.

The last example, Figure 4, is a server-side-only ETL pipeline without any visualization. We have an example of a straightforward ETL process using the cuDF bindings and the nteract notebook desktop application, which you can preview with our video and nteract with (get it) in our notebook.

What is next?

While we have been thinking about this project for a while, we are just getting started in development. RAPIDS is an incredible framework, and we want to bring it to more people and more applications – RAPIDS everywhere as we say.

Near-term next steps:

  • Some short-term next steps are to continue building core RAPIDS binding features, which you can check out on our current binding coverage table.
  • If the idea of GPU accelerated SQL queries straight from your web app sounds interesting (it does to us), we hope to get started on some blazingSQL bindings soon too.
  • And most noteworthy, we plan to start creating and publishing modular docker containers, which will dramatically simplify the current from-source tech preview installation process.

As always, we need community engagement to help guide us. If you have feature requests, questions, or use cases, you can reach out to us!

This project has enormous potential. It can accelerate a wide variety of Node.js applications, as well as bring first-class, high-performance data science and visualization tools to a huge community. We hope you join us at the beginning of this exciting project.


Categories
Misc

Developing a Question Answering Application Quickly Using NVIDIA Riva

There is a high chance that you have asked your smart speaker a question like, “How tall is Mount Everest?” If you did, it probably said, “Mount Everest is 29,032 feet above sea level.” Have you ever wondered how it found an answer for you? Question answering (QA) is loosely defined as a system consisting … Continued

There is a high chance that you have asked your smart speaker a question like, “How tall is Mount Everest?” If you did, it probably said, “Mount Everest is 29,032 feet above sea level.” Have you ever wondered how it found an answer for you?

Question answering (QA) is loosely defined as a system, consisting of information retrieval (IR) and natural language processing (NLP), that answers questions posed by humans in a natural language. If you are not familiar with information retrieval, it is a technique for obtaining information relevant to a query from a pool of resources, such as webpages or documents in a database. The easiest way to understand the concept is through the search engine that you use daily.
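Retrieval can be illustrated with a toy sketch. In the plain-Python example below, the documents and the shared-term scoring are invented purely for illustration; real IR systems use far more sophisticated rankers such as TF-IDF or BM25:

```python
# Toy information retrieval: rank documents by how many query terms
# they contain, and return the best matches.

def retrieve(query, documents, top_k=2):
    query_terms = set(query.lower().split())
    scored = []
    for doc in documents:
        doc_terms = set(doc.lower().split())
        score = len(query_terms & doc_terms)  # shared-term count
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop documents that share no terms with the query.
    return [doc for score, doc in scored[:top_k] if score > 0]

docs = [
    "mount everest is the tallest mountain above sea level",
    "speech recognition converts spoken audio into text",
    "the pacific ocean is the largest ocean on earth",
]
print(retrieve("how tall is mount everest", docs, top_k=1))
```

Running this returns the Everest document, since it shares the most terms with the query; a production system would then hand the retrieved text to the NLP stage.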

You then need an NLP system to find, within the content that the IR system retrieves, an answer that is relevant to the query. Although I just listed what you need for building a QA system, building IR and NLP systems from scratch is not a trivial task. Here’s how NVIDIA Riva makes it easy to develop a QA system.

Riva overview

NVIDIA Riva is an accelerated SDK for building multimodal conversational AI services that use an end-to-end deep learning pipeline. The Riva framework includes optimized services for speech, vision, and natural language understanding (NLU) tasks. In addition to providing several pretrained models for the entire pipeline of your conversational AI service, Riva is also architected for deployment at scale. In this post, I look closely into the QA function of Riva and how you can create your own QA application with it.

Riva QA function

To understand how the Riva QA function works, start with Bidirectional Encoder Representations from Transformers (BERT). It’s a transformer-based NLP pretraining method developed by Google in 2018, and it completely changed the field of NLP. BERT understands the contextual representation of a given word in a text. It is pretrained on a large corpus of data, including Wikipedia.

With the pretrained BERT, a strong NLP engine, you can further fine-tune it to perform QA using many question-answer pairs like those in the Stanford Question Answering Dataset (SQuAD). The model can then find an answer to a question posed in natural language from a given context: sentences or paragraphs. Figure 1 shows an example in which the fine-tuned QA model, given the paragraph as context, highlights the word “gravity” as the answer to the query, “What causes precipitation to fall?”.

A paragraph describing the meteorological explanation of precipitation. It has three pairs of questions and answers at the bottom.
Figure 1. Question-answer pairs for a sample passage in the SQuAD dataset.
Source: SQuAD: 100,000+ Questions for Machine Comprehension of Text.

Create a QA system with Riva

Teams of engineers and researchers at NVIDIA deliver a quality QA function that you can use right out of the box with Riva. The Riva NLP service provides a set of high-level API actions, including one for QA: NaturalQuery. The Wikipedia API action allows you to fetch articles posted on Wikipedia, an online encyclopedia, with a query in natural language. That’s the information retrieval system that I discussed earlier. Combining the Wikipedia API action and the Riva QA function, you can create a simple QA system with a few lines of Python code.

Start by installing the Wikipedia API for Python. Next, import the Riva NLP service API and gRPC, the underlying communication framework for Riva.

!pip install wikipedia
import wikipedia as wiki
import grpc
import riva_api.riva_nlp_pb2 as rnlp
import riva_api.riva_nlp_pb2_grpc as rnlp_srv

Now, create an input query. Use the Wikipedia API action to fetch the relevant articles, capping how many are fetched with max_articles_combine. Ask a question: “What is speech recognition?” Then print the titles of the articles returned from the search. Finally, add the summary of each article into a variable, combined_summary.

input_query = "What is speech recognition?"
wiki_articles = wiki.search(input_query)
max_articles_combine = 3
combined_summary = ""
if len(wiki_articles) == 0:
    print("ERROR: Could not find any matching results in Wikipedia.")
else:
    for article in wiki_articles[:min(len(wiki_articles), max_articles_combine)]:
        print(f"Getting summary for: {article}")
        combined_summary += "\n" + wiki.summary(article)
The figure shows the printed output of the Python code run, three articles related to speech recognition.
Figure 2. Titles of articles fetched by Wikipedia API action.

Next, open a gRPC channel that points to the location where the Riva server is running. Because you are running the Riva server locally, this is localhost:50051. Then, instantiate NaturalQueryRequest and send a request to the Riva server, passing both the query and the context. Finally, print the response returned from the Riva server.

channel = grpc.insecure_channel('localhost:50051')
riva_nlp = rnlp_srv.RivaLanguageUnderstandingStub(channel)
req = rnlp.NaturalQueryRequest()
req.query = input_query
req.context = combined_summary
resp = riva_nlp.NaturalQuery(req)

print(f"Query: {input_query}")
print(f"Answer: {resp.results[0].answer}")
The output of the Python code run, the query and answer, generated by the Riva QA function.
Figure 3. Example query and answer. 

Summary

With Riva QA and the Wikipedia API action, you just created a simple QA application. If there’s an article in Wikipedia that is relevant to your query, you can theoretically find answers. Imagine that you have a database full of articles relevant to your domain, company, industry, or anything of interest. You could create a QA service that finds answers to questions specific to your field of interest. Obviously, you would need an IR system to fetch relevant articles from your database, like the Wikipedia API action used in this post. When you have the IR system in your pipeline, Riva can help you find the answers. We look forward to the cool applications that you’ll create with Riva.

Categories
Misc

An Exclusive Invitation: Peek Behind the Omniverse Curtain at the Inaugural Omniverse User Group

Join the first NVIDIA Omniverse User Group, an exclusive event hosted by the lead engineers, designers, and artists of Omniverse on August 12, during the virtual SIGGRAPH conference.

Join the first NVIDIA Omniverse User Group, an exclusive event hosted by the lead engineers, designers, and artists of Omniverse on August 12, during the virtual SIGGRAPH conference.

The Omniverse User Group inaugural event is open to all developers, researchers, creators, students, professionals, and hobbyists of all levels, whether current Omniverse power users or curious explorers. The two-hour event will feature a look into the Omniverse roadmap, and provide sneak peeks of never-before-seen technologies and experiments.

Those who attend the Omniverse User Group will:

  • Hear the vision and future of Omniverse from Rev Lebaredian, VP of Omniverse & Simulation Technology, and Richard Kerris, VP of Omniverse Developer Platform
  • Learn how you can build on and extend the Omniverse ecosystem
  • See the unveiling of “Create With Marbles: Marvelous Machines” contest submissions and winners
  • Attend “Meet the Expert” breakout sessions and speak with Omniverse engineering leads about specific platform applications and features
Image courtesy of Antonio Covelo (@ant_vfx on Twitter), one of the participants of the first Omniverse “Create With Marbles” contest

Omniverse User Group Event Details
When: Thursday, August 12 from 5:00 pm – 6:30 pm PDT/8:00 pm – 9:30 pm EDT
Where: Virtual Event via Zoom

Register now to join this exclusive event.

Mark Your Calendars for NVIDIA at SIGGRAPH

Artists and developers can explore the latest news about NVIDIA Omniverse at SIGGRAPH. Watch the NVIDIA special address on Tuesday, August 10, at 8:00 am PDT to learn about the latest tools and solutions that are driving graphics, AI, and the emergence of shared worlds. The address will be presented by Richard Kerris, Vice President of Omniverse, and Sanja Fidler, Senior Director of AI Research at NVIDIA.

And tune in to the global premiere of “Connecting in the Metaverse: The Making of the GTC Keynote.” The new documentary premieres on August 11 at 9:00 am PDT, highlighting the creative minds and groundbreaking technologies behind the making of the NVIDIA GTC 2021 keynote. See how a small team of artists used NVIDIA Omniverse to blur the line between real and rendered.

Join NVIDIA at SIGGRAPH and learn more about the latest tools and technologies driving real-time graphics, AI-enhanced workflows and virtual collaboration.

For additional support, check out the developer forum and join the Omniverse Discord server to chat with the community.

Categories
Offsites

Mapping Africa’s Buildings with Satellite Imagery

An accurate record of building footprints is important for a range of applications, from population estimation and urban planning to humanitarian response and environmental science. After a disaster, such as a flood or an earthquake, authorities need to estimate how many households have been affected. Ideally there would be up-to-date census information for this, but in practice such records may be out of date or unavailable. Instead, data on the locations and density of buildings can be a valuable alternative source of information.

A good way to collect such data is through satellite imagery, which can map the distribution of buildings across the world, particularly in areas that are isolated or difficult to access. However, detecting buildings with computer vision methods in some environments can be a challenging task. Because satellite imaging involves photographing the earth from several hundred kilometres above the ground, even at high resolution (30–50 cm per pixel), a small building or tent shelter occupies only a few pixels. The task is even more difficult for informal settlements, or rural areas where buildings constructed with natural materials can visually blend into the surroundings. There are also many types of natural and artificial features that can be easily confused with buildings in overhead imagery.

Objects that can confuse computer vision models for building identification (clockwise from top left): pools, rocks, enclosure walls, and shipping containers.

In “Continental-Scale Building Detection from High-Resolution Satellite Imagery”, we address these challenges, using new methods for detecting buildings that work in rural and urban settings across different terrains, such as savannah, desert, and forest, as well as informal settlements and refugee facilities. We use this building detection model to create the Open Buildings dataset, a new open-access data resource containing the locations and footprints of 516 million buildings with coverage across most of the African continent. The dataset will support several practical, scientific and humanitarian applications, ranging from disaster response or population mapping to planning services such as new medical facilities or studying human impact on the natural environment.

Model Development
We built a training dataset for the building detection model by manually labelling 1.75 million buildings in 100k images. The figure below shows some examples of how we labelled images in the training data, taking into account confounding characteristics of different areas across the African continent. In rural areas, for example, it was necessary to identify different types of dwelling places and to disambiguate them from natural features, while in urban areas we needed to develop labelling policies for dense and contiguous structures.

(1) Example of a compound containing both dwelling places as well as smaller outbuildings such as grain stores. (2) Example of a round, thatched-roof structure that can be difficult for a model to distinguish from trees, and where it is necessary to use cues from pathways, clearings and shadows to disambiguate. (3) Example of several contiguous buildings for which the boundaries cannot be easily distinguished.

We trained the model to detect buildings in a bottom-up way, first by classifying each pixel as building or non-building, and then grouping these pixels together into individual instances. The detection pipeline was based on the U-Net model, which is commonly used in satellite image analysis. One advantage of U-Net is that it is a relatively compact architecture, and so can be applied to large quantities of imaging data without a heavy compute burden. This is critical, because the final task of applying this to continental-scale satellite imagery means running the model on many billions of image tiles.

Example of segmenting buildings in satellite imagery. Left: Source image; Center: Semantic segmentation, with each pixel assigned a confidence score that it is a building vs. non-building; Right: Instance segmentation, obtained by thresholding and grouping together connected components.
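The thresholding-and-grouping step described above can be sketched with NumPy and SciPy. This is an illustrative reconstruction, not the project’s code; the tiny confidence map is a made-up stand-in for real U-Net output:

```python
import numpy as np
from scipy import ndimage

# Toy per-pixel building-confidence map (a stand-in for U-Net output).
confidence = np.array([
    [0.9, 0.8, 0.1, 0.1, 0.7],
    [0.9, 0.7, 0.1, 0.1, 0.8],
    [0.1, 0.1, 0.1, 0.1, 0.1],
])

# Semantic segmentation: threshold confidences into a binary mask.
mask = confidence > 0.5

# Instance segmentation: group connected pixels into labelled instances.
labels, num_instances = ndimage.label(mask)
print(num_instances)  # 2 -- two separate building instances
print(labels)
```

Each non-zero value in `labels` identifies which building instance a pixel belongs to; the two high-confidence regions, separated by background, come out as distinct instances.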

Initial experiments with the basic model had low precision and recall, for example due to the variety of natural and artificial features with building-like appearance. We found a number of methods that improved performance. One was the use of mixup as a regularisation method, where random training images are blended together by taking a weighted average. Though mixup was originally proposed for image classification, we modified it to be used for semantic segmentation. Regularisation is important in general for this building segmentation task, because even with 100k training images, the training data do not capture the full variation of terrain, atmospheric and lighting conditions that the model is presented with at test time, and hence, there is a tendency to overfit. This is mitigated by mixup as well as random augmentation of training images.
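As a rough sketch of how mixup extends to segmentation (our reconstruction under stated assumptions, not the authors’ implementation), two image/mask pairs are blended with one shared weight drawn from a Beta distribution, which turns the hard per-pixel labels into soft ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(img_a, mask_a, img_b, mask_b, alpha=0.2):
    """Blend two image/mask pairs with one shared Beta-distributed weight."""
    lam = rng.beta(alpha, alpha)
    mixed_img = lam * img_a + (1.0 - lam) * img_b
    # Labels become soft: a per-pixel building probability, not hard 0/1.
    mixed_mask = lam * mask_a + (1.0 - lam) * mask_b
    return mixed_img, mixed_mask

img_a = rng.random((4, 4)); mask_a = np.ones((4, 4))   # all building
img_b = rng.random((4, 4)); mask_b = np.zeros((4, 4))  # all background
mixed_img, mixed_mask = mixup_pair(img_a, mask_a, img_b, mask_b)
# Every pixel of the mixed mask carries the same blending weight lambda.
print(np.unique(mixed_mask).size)  # 1
```

The loss is then computed against the soft mask, which acts as the regulariser: the network never sees the same exact training pair twice.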

Another method that we found to be effective was the use of unsupervised self-training. We prepared a set of 100 million satellite images from across Africa, and filtered these to a subset of 8.7 million images that mostly contained buildings. This dataset was used for self-training using the Noisy Student method, in which the output of the best building detection model from the previous stage is used as a ‘teacher’ to then train a ‘student’ model that makes similar predictions from augmented images. In practice, we found that this reduced false positives and sharpened the detection output. The student model gave higher confidence to buildings and lower confidence to background.

Difference in model output between the student and teacher models for a typical image. In panel (d), red areas are those that the student model finds more likely to be buildings than the teacher model, and blue areas more likely to be background.
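The self-training loop described above can be caricatured with a tiny, entirely synthetic NumPy example (a 1-D stand-in for the real imagery and models): a fixed “teacher” pseudo-labels an unlabeled pool, and a logistic “student” is trained on noise-augmented copies of those inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for Noisy Student self-training: the "teacher" is a
# fixed threshold classifier playing the role of the best supervised model.
def teacher(x):
    return (x > 0.5).astype(float)

# Unlabeled pool (stand-in for the filtered satellite images).
unlabeled = rng.random(1000)
pseudo_labels = teacher(unlabeled)  # teacher produces pseudo-labels

# The student sees noise-augmented inputs but the teacher's labels.
augmented = unlabeled + rng.normal(0.0, 0.05, size=unlabeled.shape)

# Train a one-feature logistic "student" by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * augmented + b)))
    w -= np.mean((p - pseudo_labels) * augmented)
    b -= np.mean(p - pseudo_labels)

# On clean inputs the student should now largely agree with the teacher.
test_x = rng.random(200)
student_pred = 1.0 / (1.0 + np.exp(-(w * test_x + b))) > 0.5
agreement = float(np.mean(student_pred == (test_x > 0.5)))
print(round(agreement, 2))
```

The real pipeline replaces the threshold function and logistic model with full segmentation networks, but the shape of the loop, pseudo-label, augment, retrain, is the same.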

One problem that we faced initially was that our model had a tendency to create “blobby” detections, without clearly delineated edges and with a tendency for neighbouring buildings to be merged together. To address this, we applied another idea from the original U-Net paper, which is to use distance weighting to adapt the loss function to emphasise the importance of making correct predictions near boundaries. During training, distance weighting places greater emphasis at the edges by adding weight to the loss — particularly where there are instances that nearly touch. For building detection, this encourages the model to correctly identify the gaps in between buildings, which is important so that many close structures are not merged together. We found that the original U-Net distance weighting formulation was helpful but slow to compute. So, we developed an alternative based on Gaussian convolution of edges, which was both faster and more effective.

Distance weighting schemes to emphasise nearby edges: U-Net (left) and Gaussian convolution of edges (right).
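Our reading of the Gaussian alternative can be sketched as follows (an assumption-laden illustration, not the paper’s exact formulation): take the boundary pixels of the label mask, blur them with a Gaussian, and use the result as additive per-pixel loss weights:

```python
import numpy as np
from scipy import ndimage

# Binary label mask with two nearly touching buildings.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 1:4] = True
mask[2:6, 5:8] = True

# Boundary pixels: the mask minus its erosion.
edges = mask & ~ndimage.binary_erosion(mask)

# Gaussian-blurred boundaries become additive loss weights, so pixels
# near edges -- including the gap between the two buildings -- count more.
weights = 1.0 + 5.0 * ndimage.gaussian_filter(edges.astype(float), sigma=1.0)

print(weights[3, 4] > weights[0, 0])  # True: gap pixel outweighs far background
```

Because the blurred edge maps of nearly touching instances overlap in the gap between them, those background pixels receive extra weight, which is exactly what discourages the model from merging close structures. The scale factor and sigma here are arbitrary illustrative choices.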

Our technical report has more details on each of these methods.

Results
We evaluated the performance of the model on several different regions across the continent, in different categories: urban, rural, and medium-density. In addition, with the goal of preparing for potential humanitarian applications, we tested the model on regions with displaced persons and refugee settlements. Precision and recall did vary between regions, so achieving consistent performance across the continent is an ongoing challenge.

Precision-recall curves, measured at 0.5 intersection-over-union threshold.

When visually inspecting the detections for low-scoring regions, we noted various causes. In rural areas, label errors were problematic. For example, single buildings within a mostly-empty area can be difficult for labellers to spot. In urban areas, the model had a tendency to split large buildings into separate instances. The model also underperformed in desert terrain, where buildings were hard to distinguish against the background.

We carried out an ablation study to understand which methods contributed most to the final performance, measured in mean average precision (mAP). Distance weighting, mixup and the use of ImageNet pre-training were the biggest factors for the performance of the supervised learning baseline. The ablated models that did not use these methods had a mAP difference of -0.33, -0.12 and -0.07 respectively. Unsupervised self-training gave a further significant boost of +0.06 mAP.

Ablation study of training methods. The first row shows the mAP performance of the best model combined with self-training, and the second row shows the best model with supervised learning only (the baseline). By disabling each training optimization from the baseline in turn, we observe the impact on mAP test performance. Distance weighting has the most significant effect.

Generating the Open Buildings Dataset
To create the final dataset, we applied our best building detection model to satellite imagery across the African continent (8.6 billion image tiles covering 19.4 million km², 64% of the continent), which resulted in the detection of 516 million distinct structures.

Each building’s outline was simplified as a polygon and associated with a Plus Code, which is a geographic identifier made up of numbers and letters, akin to a street address, and useful for identifying buildings in areas that don’t have formal addressing systems. We also include confidence scores and guidance on suggested thresholds to achieve particular precision levels.

The sizes of the structures vary as shown below, tending towards small footprints. The inclusion of small structures is important, for example, to support analyses of informal settlements or refugee facilities.

Distribution of building footprint sizes.

The data is freely available and we look forward to hearing how it is used. In the future, we may add new features and regions, depending on usage and feedback.

Acknowledgements
This work is part of our AI for Social Good efforts and was led by Google Research, Ghana. Thanks to the co-authors of this work: Wojciech Sirko, Sergii Kashubin, Marvin Ritter, Abigail Annkah, Yasser Salah Edine Bouchareb, Yann Dauphin, Daniel Keysers, Maxim Neumann and Moustapha Cisse. We are grateful to Abdoulaye Diack, Sean Askay, Ruth Alcantara and Francisco Moneo for help with coordination. Rob Litzke, Brian Shucker, Yan Mayster and Michelina Pallone provided valuable assistance with geo infrastructure.

Categories
Misc

An AI a Day Keeps Dr.Fill at Play: Matt Ginsberg on Building GPU-Powered Crossword Solver

9 Down, 14 letters: Someone skilled in creating and solving crossword puzzles. This April, the fastest “cruciverbalist” at the ​​American Crossword Puzzle Tournament was Dr.Fill, a crossword puzzle-solving AI program created by Matt Ginsberg. Dr.Fill perfectly solved the championship puzzle in 49 seconds. The first human champion, Tyler Hinman, filled the 15×15 crossword in exactly Read article >

The post An AI a Day Keeps Dr.Fill at Play: Matt Ginsberg on Building GPU-Powered Crossword Solver appeared first on The Official NVIDIA Blog.

Categories
Misc

YOLOR + DeepSORT Object Tracking

YOLOR + DeepSORT Object Tracking submitted by /u/NickFortez06
[visit reddit] [comments]
Categories
Misc

anime recommender model & analysis – I didn’t use TFRS but it seems to be working pretty decent. Any feedback?

anime recommender model & analysis - I didn't use TFRS but it seems to be working pretty decent. Any feedback? submitted by /u/much_bad_gramer
[visit reddit] [comments]
Categories
Misc

Discovering New Features in CUDA 11.4

NVIDIA announces the newest release of the CUDA development environment, CUDA 11.4. This release includes GPU-accelerated libraries, debugging and optimization tools, programming language enhancements, and a runtime library to build and deploy your application on GPUs across the major CPU architectures: x86, Arm, and POWER. CUDA 11.4 is focused on enhancing the programming model and … Continued

NVIDIA announces the newest release of the CUDA development environment, CUDA 11.4. This release includes GPU-accelerated libraries, debugging and optimization tools, programming language enhancements, and a runtime library to build and deploy your application on GPUs across the major CPU architectures: x86, Arm, and POWER.

CUDA 11.4 is focused on enhancing the programming model and performance of your CUDA applications. CUDA continues to push the boundaries of GPU acceleration and lay the foundation for new applications in HPC, graphics, CAE, AI and deep learning, automotive, healthcare, and data science.

CUDA 11.4 has several important features. This post offers an overview of the key capabilities:

  • CUDA programming model enhancements:
    • CUDA Graphs
    • Multi-Process Service (MPS)
    • Formalized asynchronous data movement
  • C++ language support
  • CUDA compiler enhancements
  • CUDA driver enhancements

CUDA 11.4 ships with the R470 driver, which is a long-term support branch. GPUDirect RDMA and GPUDirect Storage (GDS) are now part of the CUDA Driver and Toolkit. This streamlines the workflow and enables developers to leverage these technologies without installing additional packages separately. The driver also enables new MIG configurations for the recently launched NVIDIA A30 GPU, which double the memory per MIG slice. This results in greater peak performance for various workloads on the A30 GPU, especially AI inference workloads.

CUDA 11.4 is available to download.

CUDA programming model enhancements

This release introduces key enhancements that improve the performance of CUDA Graphs without requiring any application modifications or other user intervention. It also improves the ease of use of Multi-Process Service (MPS). We have also formalized the asynchronous programming model in the CUDA Programming Guide.

CUDA Graphs

Reducing graph launch latency is a common request from the developer community, especially in applications that have real-time constraints, such as 5G telecom workloads or AI inference workloads. CUDA 11.4 delivers performance improvements that reduce CUDA graph launch times. We also integrated the stream-ordered memory allocation feature that was introduced in CUDA 11.2.

For more information, see CUDA Graphs in the CUDA Toolkit Programming Guide and Getting Started with CUDA Graphs.

Performance improvements

CUDA graphs are ideal for workloads that are executed multiple times, so a key tradeoff in choosing graphs for a workload is amortizing the cost of creating a graph over repeated launches. The higher the number of repetitions or iterations, the larger the performance improvement.
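The tradeoff can be made concrete with back-of-the-envelope arithmetic; the costs below are invented purely for illustration:

```python
# Back-of-the-envelope amortization: hypothetical costs in microseconds.
graph_create = 100.0   # one-time cost to construct/instantiate the graph
graph_launch = 2.0     # per-iteration cost to launch the whole graph
stream_launch = 10.0   # per-iteration cost of launching kernels one by one

def break_even_iterations(create, g_launch, s_launch):
    # Graphs pay off once create + n * g_launch < n * s_launch.
    return create / (s_launch - g_launch)

n = break_even_iterations(graph_create, graph_launch, stream_launch)
print(n)  # 12.5 -> graphs win after ~13 repeat launches
```

With these made-up numbers, graph creation is repaid after roughly 13 launches; beyond that, every additional iteration widens the advantage.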

In CUDA 11.4, we made a couple of key changes to CUDA graph internals that further improve launch performance. CUDA graphs already sidestep streams to enable lower-latency runtime execution. We extended this to bypass streams at the launch phase as well, submitting a graph as a single block of work directly to the hardware. We’ve seen good performance gains from these host-side improvements, both for single-threaded and multithreaded applications.

Figure 1 shows the relative improvement in launch latency for the re-launch of different graph patterns. There is significant benefit for graphs that have a fork or join pattern.

Figure showing the relative launch latency improvements in CUDA 11.4 for repeat launches of a single graph for different graph patterns against CUDA 11.3.
Figure 1. Launch latency performance improvement for a single graph for repeat launches with CUDA 11.3 baselines.

Multithreaded launch performance is particularly affected by the resource contention that happens when launching multiple graphs in parallel. We’ve optimized the interthread locking to reduce contention, and so multithreaded launch is now significantly more efficient. Figure 2 shows the relative performance benefits of the changes in CUDA 11.4 to ease resource contention and how it scales with the number of threads.

Figure showing the relative launch latency improvements in CUDA 11.4 for multithreaded launch of a straight-line graph for different numbers of threads.
Figure 2. Multithreaded launch latency improvement for a straight-line graph for repeat launches, with CUDA 11.3 as the baseline.

Stream-ordered memory allocator support

The stream-ordered memory allocator enables applications to order memory allocation and deallocation with respect to other work launched into a CUDA stream. This also enables allocation re-use, which can significantly improve application performance. For more information about the feature and capabilities, see Enhancing Memory Allocation with New NVIDIA CUDA 11.2 Features.

In CUDA 11.4, CUDA Graphs now supports stream-ordered memory allocation both through stream capture or in native graph construction through new allocate and free node types, enabling the same efficient, deferred memory reuse logic within graphs.

These node types are collectively referred to as memory nodes. They can be created in several ways:

  • Using the explicit API
    • Using cudaGraphAddMemAllocNode and cudaGraphAddMemFreeNode, respectively
  • Using stream capture
    • Using cudaMallocAsync/cudaMallocFromPoolAsync and cudaFreeAsync, respectively

In the same way that stream-ordered allocation uses implicit stream ordering and event dependencies to reuse memory, graph-ordered allocation uses the dependency information defined by the edges of the graph to do the same.

Figure showing the new Memalloc and MemFree graph nodes in a simple graph example
Figure 3. Intra-graph memory reuse. When a MemAlloc node is created, it attempts to reuse memory, which was freed by MemFree nodes that it depends upon.

For more information, see Stream Ordered Memory Allocator.
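A minimal sketch of creating memory nodes through stream capture follows; the kernel and sizes are illustrative, and a device that supports cudaMallocAsync is assumed:

```cuda
#include <cuda_runtime.h>

__global__ void fill(float* p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] = 1.0f;
}

int main() {
    const int n = 1 << 16;
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Capturing cudaMallocAsync/cudaFreeAsync turns them into
    // MemAlloc/MemFree nodes; graph-ordered reuse follows the edges.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    float* d_tmp;
    cudaMallocAsync(&d_tmp, n * sizeof(float), stream);
    fill<<<(n + 255) / 256, 256, 0, stream>>>(d_tmp, n);
    cudaFreeAsync(d_tmp, stream);
    cudaStreamEndCapture(stream, &graph);

    cudaGraphExec_t graphExec;
    cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);
    cudaGraphLaunch(graphExec, stream);  // allocation lives only inside the graph
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    return 0;
}
```

Because the free node depends on the kernel node, repeated launches of the graph can reuse the same allocation safely.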

Enhancements to MPS

The Multi-Process Service (MPS) is a binary-compatible client-server runtime implementation of the CUDA API designed to transparently enable co-operative multiprocess CUDA applications.

It consists of a control daemon process, client runtime, and server process. MPS enables better GPU utilization in cases where a single process does not use all the compute and memory-bandwidth capacity. MPS also reduces on-GPU context storage and context switching. For more information, see Multi-Process Service in the GPU Management and Deployment guide.

In this release, we made a couple of key enhancements to improve the ease of use of MPS.

Figure showing the schematic representation of MPS on pre-Volta and post-Volta GPUs
Figure 4. Schematic representation of MPS with reduced on-GPU context storage and context switching to improve ease-of-use.

Programmatic configuration of SM partitions

There are certain use cases that share the following characteristics:

  • They consist of kernels that have little to no interaction, which enables concurrent execution.
  • The ratio of SMs required by these workloads may change and requires flexibility in allocating the right number of SMs.

The MPS active thread percentage setting enables you to limit the execution to a portion of the SMs. Before CUDA 11.4, this was a fixed value that was set equally for all clients within the process. In CUDA 11.4, this has been extended to offer a mechanism to partition the SMs at a per-client level through a programmatic interface. This enables you to create contexts that have different SM partitions within the same application process.

A new resource type called CU_EXEC_AFFINITY_TYPE_SM_COUNT enables you to specify a minimum number N that the context requires. The system guarantees that at least this many SMs are assigned, although more may be reserved. CUDA 11.4 also introduces a related affinity API cuCtxGetExecAffinity, which queries the exact amount of a resource (such as the SM count) allocated for a context. For more information, see the cuCtxGetExecAffinity section in the API documentation.
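A sketch of requesting an SM partition with the driver API might look like the following. Note that SM-count affinity takes effect when running under MPS on Volta or newer GPUs; the requested count of 4 is arbitrary:

```cuda
#include <cuda.h>
#include <cstdio>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    // Request a context that is guaranteed at least 4 SMs.
    CUexecAffinityParam param;
    param.type = CU_EXEC_AFFINITY_TYPE_SM_COUNT;
    param.param.smCount.val = 4;

    CUcontext ctx;
    cuCtxCreate_v3(&ctx, &param, 1, 0, dev);

    // Query how many SMs were actually granted (may exceed the request).
    CUexecAffinityParam granted;
    cuCtxGetExecAffinity(&granted, CU_EXEC_AFFINITY_TYPE_SM_COUNT);
    printf("SMs granted: %u\n", granted.param.smCount.val);

    cuCtxDestroy(ctx);
    return 0;
}
```

Different client contexts within the same process can request different SM counts this way, which was not possible with the process-wide active thread percentage setting alone.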

Error reporting

To improve the error reporting and ease of diagnosing the root cause of MPS issues, we introduced new and detailed driver and runtime error codes. These error codes provide more specificity regarding the type of error. They supplement the common MPS error codes with additional information to help you trace down the cause of the failures. Use these error codes in your applications with the error messages in the server log, as part of the root cause analysis.

New error codes:

 CUDA_ERROR_MPS_CONNECTION_FAILED
 CUDA_ERROR_MPS_SERVER_NOT_READY
 CUDA_ERROR_MPS_RPC_FAILURE
 CUDA_ERROR_MPS_MAX_CLIENTS_REACHED
 CUDA_ERROR_MPS_MAX_CONNECTIONS_REACHED 

Formalizing asynchronous data movement

In CUDA 11.4, we formalized and defined the asynchronous SIMT programming model, in support of the asynchronous memory transfer operations enabled by the NVIDIA A100 GPU microarchitecture. The asynchronous programming model defines the behavior and the APIs for C++ 20 barriers and cuda::memcpy_async on the GPU.

For more information about how you can use the asynchronous APIs to overlap memory operations from global memory, with computations in the streaming multiprocessors (SMs), see Asynchronous SIMT Programming Model.
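A minimal sketch of this pattern is shown below, assuming a GPU of compute capability 7.0 or higher (the hardware-accelerated copy path requires 8.0); the kernel and tile size are illustrative:

```cuda
#include <cuda/barrier>
#include <cooperative_groups.h>

namespace cg = cooperative_groups;

// Stage a 256-element tile into shared memory with cuda::memcpy_async,
// then wait on a block-scope barrier before using it.
__global__ void sum_tile(const float* in, float* out) {
    __shared__ float tile[256];
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;

    auto block = cg::this_thread_block();
    if (block.thread_rank() == 0)
        init(&bar, block.size());  // one thread initializes the barrier
    block.sync();

    // The copy proceeds asynchronously; threads may do unrelated work here.
    cuda::memcpy_async(block, tile, in, sizeof(tile), bar);

    bar.arrive_and_wait();         // copy complete: tile is safe to read

    if (block.thread_rank() == 0) {
        float s = 0.f;
        for (int i = 0; i < 256; ++i) s += tile[i];
        *out = s;
    }
}

int main() {
    float *d_in, *d_out;
    cudaMalloc(&d_in, 256 * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    sum_tile<<<1, 256>>>(d_in, d_out);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

The barrier makes the completion of the copy visible to all threads in the block, so no explicit __syncthreads after the copy is required.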

Other enhancements

In addition to the key capabilities listed earlier, CUDA 11.4 includes a few enhancements geared towards improving multithreaded submission throughput and extending the CUDA forward compatibility support to NVIDIA RTX GPUs.

Multithread submission throughput

In 11.4, we reduced the serialization of the CUDA API overheads between CPU threads. These changes are enabled by default. However, to assist with the triage of possible issues because of the underlying changes, we provide an environment variable, CUDA_REDUCE_API_SERIALIZATION, to turn off these changes. This was one of the underlying changes discussed earlier that contributed to the performance improvements for CUDA graphs.

CUDA forward compatibility

To enable use cases where you want to update your CUDA toolkit but stay on your current driver version, for example to reduce the risk or the overhead of additional validation needed to move to a new driver, CUDA offers the CUDA forward compatibility path. This was introduced in CUDA 10.0 but was initially limited to data-center GPUs. CUDA 11.4 eases those restrictions and you can now take advantage of the forward compatibility path for NVIDIA RTX GPUs as well.

Figure showing flowchart diagram to help developers decide which CUDA driver upgrade path best meets their needs
Figure 5. Forward compatibility upgrade path between pre-CUDA 11.0 releases and current CUDA 11.x drivers.

C++ language support for CUDA

Here are some key enhancements included with C++ language support in CUDA 11.4.

  • Major releases:
    • NVIDIA C++ Standard Library (libcu++) 1.5.0 was released with CUDA 11.4.
    • Thrust 1.12.0 has the new thrust::universal_vector API that enables you to use the CUDA unified memory with Thrust.
  • Bug fix release: The CUDA 11.4 toolkit release includes CUB 1.12.0.
  • New asynchronous thrust::async::exclusive_scan and inclusive_scan algorithms have been added, and the synchronous versions of these were updated to use cub::DeviceScan directly.

CUDA compiler enhancements

The CUDA 11.4 NVCC C++ compiler has JIT LTO support in preview, offers more L1 and L2 cache control, and exposes a C++ symbol demangling static library, along with NVIDIA Nsight debugger support for alloca.

JIT link time optimization

JIT link-time optimization (LTO) is a preview feature and is available only on CUDA Toolkit 11.4, not on embedded platforms. This feature enables LTO to be performed at runtime. Use NVRTC to generate NVVM IR, and then use the cuLink driver APIs to link the NVVM IR and do LTO. 

The following code example shows how runtime JIT LTO can be used in your program.
Generate NVVM IR using nvrtcCompileProgram with the -dlto option, and retrieve the generated NVVM IR using the newly introduced nvrtcGetNVVM. The existing cuLink APIs are augmented to accept NVVM IR as input and perform JIT LTO: pass the CU_JIT_LTO option to the cuLinkCreate API to instantiate the linker, then use CU_JIT_INPUT_NVVM as an option to the cuLinkAddFile or cuLinkAddData API for further linking of NVVM IR.

 nvrtcProgram prog1, prog2;
 CUlinkState linkState;
 int err;
 void* cubin;
 size_t cubinSize;
 char *nvvmIR1, *nvvmIR2;
 
 const char* opts[] = {"--gpu-architecture=compute_80", "--dlto"};
 
 NVRTC_SAFE_CALL(
      nvrtcCompileProgram(prog1, ...));
 NVRTC_SAFE_CALL(
      nvrtcCompileProgram(prog2, ...));
 
 nvrtcGetNVVM(prog1, nvvmIR1);
 nvrtcGetNVVM(prog2, nvvmIR2);
 
 options[0] = CU_JIT_LTO;
 values[0] = (void*)1;   // enable link-time optimization
 ...
 cuLinkCreate(..., options, values, &linkState);
 err = cuLinkAddData(linkState, CU_JIT_INPUT_NVVM,
                     (void*)nvvmIR1, strlen(nvvmIR1) + 1, ...);
 ...
 err = cuLinkAddData(linkState, CU_JIT_INPUT_NVVM,
                     (void*)nvvmIR2, strlen(nvvmIR2) + 1, ...);
 ...
 cuLinkComplete(linkState, &cubin, &cubinSize);
 ... 

libcu++filt library support

The CUDA SDK now ships with libcu++filt, a static library that converts compiler-mangled C++ symbols into user-readable names. The following API, found in the nv_decode.h header file, is the entry point to the library:

char* __cu_demangle(const char* id, char *output_buffer, size_t *length, int *status) 

The following C++ example code shows usage:

 #include <iostream>
 #include <cstdlib>
 #include "/usr/local/cuda-11.4/bin/nv_decode.h"
  
 using namespace std;
 int main(int argc, char **argv)
 {
   const char* mangled_name = "_ZN6Scope15Func1Enez";
   int status = 1;
   char* w = __cu_demangle(mangled_name, 0, 0, &status);
   if (status != 0)
     cout << "Demangling Failed: " << mangled_name << endl;
   else
     cout << "Demangling Succeeded: " << w << endl;
   free(w);
   return 0;
 }



This code example outputs as follows:

Demangling Succeeded: Scope1::Func1(__int128, long double, ...)

For more information, see Library Availability in the CUDA Binary Utilities documentation.

Configuring cache behavior in PTX

PTX ISA 7.4 gives you more control over caching behavior of both L1 and L2 caches. The following capabilities are introduced in this PTX ISA version:

  • Enhanced data prefetching: The new .level::prefetch_size qualifier can be used to prefetch additional data along with memory load or store operations. This enables exploiting the spatial locality of data.
  • Eviction priority control: PTX ISA 7.4 introduces four cache eviction priorities. These eviction priorities can be specified with the .level::eviction_priority qualifier on memory load or store operations (applicable to the L1 cache) and on the prefetch instruction (applicable to the L2 cache).
    • evict_normal (default)
    • evict_last (useful when the data should be kept in the cache longer)
    • evict_first (useful for streaming data)
    • no_allocate (avoid data from being cached at all)
  • Enhanced L2 cache control: This comes in two flavors:
    • Cache control on specific addresses: The new discard instruction enables discarding data from the cache without writing it back to memory. It should be used only when the data is no longer required. The new applypriority instruction sets the eviction priority of specific data to evict_normal. This is particularly useful for downgrading the eviction priority from evict_last when the data no longer needs to be persistent in cache.
    • Cache-hints on memory operations: The new createpolicy instruction enables creating a cache policy descriptor that encodes one or more cache eviction priorities for different data regions. Several memory operations including load, store, asynchronous copy, atom, red, and so on can accept cache policy descriptor as an operand when the .level::cache_hint qualifier is used.

These extensions are treated as performance hints only. The caching behavior specified using these extensions is not guaranteed by the caching system. For more information about usage, see the PTX ISA specification.

Other compiler enhancements in CUDA 11.4 include support for a new host compiler, ICC 2021. The diagnostics emitted by the CUDA frontend compiler are now ANSI colored, and the Nsight debugger can now correctly unwind CUDA applications with alloca calls in the Call Stack view.

Nsight Developer Tools

New versions are now available for NVIDIA Nsight Visual Studio Code Edition (VSCE) and Nsight Compute 2021.2, adding enhancements to the developer experience for CUDA programming.

NVIDIA Nsight VSCE is an application development environment for heterogeneous platforms bringing CUDA development for GPUs into Microsoft Visual Studio Code. NVIDIA Nsight VSCE enables you to build and debug GPU kernels and native CPU code in the same session as well as inspect the state of the GPU and memory.

It includes IntelliSense code highlighting for CUDA applications and an integrated GPU debugging experience from the IDE, with support for stepping through code, setting breakpoints, and inspecting memory states and system information in CUDA kernels. Now it’s easy to develop and debug CUDA applications directly from Visual Studio Code.

Nsight Compute 2021.2 adds new features that help detect more performance issues and make it easier to understand and fix them. The new register dependency visualization (Figure 6) helps identify long dependency chains and inefficient register usage that can limit performance. This release also adds a frequently requested feature to enable you to view the side-by-side assembly and correlated source code for CUDA kernels in the source view, without needing to collect a profile. This standalone source viewer feature enables you to open .cubin files directly from disk in the GUI to see the code correlation.

Example screen shot of Nsight Compute tool helping developers visualize registry dependency to help improve code performance
Figure 6. Nsight Compute Register Dependency Visualization capability helps identify long dependency chains and inefficient register usage to help improve performance.

Several features, including highlighted focus metrics, report cross-links, increased rule visibility, and documentation references all add to the built-in profile and optimization guided analysis that Nsight Compute provides to help you understand and fix performance bottlenecks.

This release also includes support for OptiX 7 resource tracking, a new Python interface for reading report data, and improvements to management of baseline reports, font settings, and CLI filters.

For overall updates, see NVIDIA Developer Tools Overview. Download the tools to get started.

For more information about the CUDA 11 generation toolkit capabilities and introductions, see CUDA 11 Features Revealed and follow all CUDA posts.

Acknowledgements

Thanks to the following key contributors: Stephen Jones, Arthy Sundaram, Fred Oh, and Sally Stevenson.

Categories
Misc

How Was NVIDIA’s 2021 GTC Keynote Made? Step Inside Our Kitchen Aug. 11 to Find Out

Ever see a virtual kitchen materialize in real-time? If you caught NVIDIA CEO Jensen Huang’s keynote for our March 2021 GPU Technology Conference you’re no doubt wondering about more than a few of the presentation’s magic tricks. With the premiere of “Connecting in the Metaverse: The Making of the GTC Keynote,” Wednesday, Aug. 11, at Read article >

The post How Was NVIDIA’s 2021 GTC Keynote Made? Step Inside Our Kitchen Aug. 11 to Find Out appeared first on The Official NVIDIA Blog.