
NVIDIA Launches Morpheus Early Access Program to Enable Advanced Cybersecurity Solution Development

NVIDIA Morpheus gives security teams complete visibility into security threats with unmatched AI processing and real-time monitoring to protect every server and screen every packet in the data center.

NVIDIA is opening early access to its Morpheus AI development framework for cybersecurity applications. Selected developers have access to Morpheus starting today, with more joining the program over the next few months.

Announced at NVIDIA GTC in April 2021, NVIDIA Morpheus gives security teams complete visibility into security threats, pairing unmatched AI processing with real-time monitoring to protect every server and screen every packet in the data center. By building on NVIDIA deep learning and data science tools including RAPIDS, CLX, Streamz, Triton Inference Server, and TensorRT, applications developed with Morpheus help security teams respond to anomalies and update policies immediately as threats are identified. Data analysis runs on NVIDIA-Certified servers built on the NVIDIA EGX platform or in qualified cloud instances that support NVIDIA GPUs, while traffic collection and telemetry can run on a variety of servers and switches, as well as on the NVIDIA BlueField-2 data processing unit (DPU).

Figure 1. NVIDIA Morpheus leverages NVIDIA data science frameworks and the NVIDIA EGX platform for data analysis, and the NVIDIA DPU for telemetry and pervasive traffic scanning.

Developers in the Morpheus early access program have immediate access to components through the NGC catalog and can load them into an Amazon Web Services Elastic Compute Cloud (AWS EC2) GPU instance, such as a G4 instance featuring an NVIDIA T4 GPU or an instance featuring an NVIDIA A100 GPU, to begin developing cybersecurity applications and solutions right away. Early access will soon support Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift on NVIDIA-Certified servers built on NVIDIA EGX for on-premises development and deployment, as well as RHEL on NVIDIA BlueField DPUs for enhanced data collection and traffic screening that can protect every server. Support for running Morpheus on Ubuntu is expected soon afterwards, followed by additional OS options.

Developers accepted to early access are being notified this week, and NVIDIA plans to expand the early access program quickly to include more security ISV partners, end users, academics, and other security professionals who wish to develop scalable, adaptive, AI-powered cybersecurity solutions.

If you are a customer, partner or researcher interested in joining the Morpheus early access program, please apply here.


GFN Thursday Heats Up with ‘LEGO Builder’s Journey’ and ‘Phantom Abyss’ Game Launches, Plus First Look at Kena: Bridge of Spirits

It’s getting hot in here, so get your game on this GFN Thursday with 13 new games joining the GeForce NOW library, including LEGO Builder’s Journey, Phantom Abyss and the Dual Universe beta. Plus, get a sneak peek at Kena: Bridge of Spirits, coming to the cloud later this year.



More Than Meets the AI: How GANs Research Is Reshaping Video Conferencing

Roll out of bed, fire up the laptop, turn on the webcam — and look picture-perfect in every video call, with the help of AI developed by NVIDIA researchers. Vid2Vid Cameo, one of the deep learning models behind the NVIDIA Maxine SDK for video conferencing, uses generative adversarial networks (known as GANs) to synthesize realistic…



Fast-Track Production AI with Pretrained Models and Transfer Learning Toolkit 3.0

NVIDIA announced new pretrained models and general availability of Transfer Learning Toolkit (TLT) 3.0, a core component of the NVIDIA Train, Adapt, and Optimize (TAO) platform's guided workflow for creating AI.

Today, NVIDIA announced new pretrained models and general availability of Transfer Learning Toolkit (TLT) 3.0, a core component of the NVIDIA Train, Adapt, and Optimize (TAO) platform's guided workflow for creating AI. The new release includes a variety of highly accurate and performant pretrained models in computer vision and conversational AI, as well as a set of powerful productivity features that boost AI development by up to 10x.

As enterprises race to bring AI-enabled solutions to market, competitiveness depends on access to the best development tools. The journey to deploying custom, high-accuracy, performant AI models in production can be treacherous for engineering and research teams attempting to train with open-source models for AI product creation. NVIDIA offers high-quality pretrained models and TLT to help reduce the costs of large-scale data collection and labeling, and to eliminate the burden of training AI/ML models from scratch. New entrants to the computer vision and speech-enabled service market can now deploy production-class AI without a massive AI development team.

Highlights of the new release include:

  • A pose-estimation model that supports real-time inference at the edge, with 9x faster inference performance than the OpenPose model.
  • PeopleSemSegNet, a semantic segmentation network for people detection.
  • A variety of computer vision pretrained models in various industry use cases, such as license plate detection and recognition, heart rate monitoring, emotion recognition, facial landmarks, and more.
  • CitriNet, a new speech-recognition model that is trained on various proprietary domain-specific and open-source datasets.
  • A new Megatron Uncased model for Question Answering, plus many other pretrained models that support speech-to-text, named-entity recognition, punctuation, and text classification.
  • Training support on AWS, GCP, and Azure.
  • Out-of-the-box deployment on NVIDIA Triton and DeepStream SDK for vision AI, and NVIDIA Jarvis for conversational AI.
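
To make the workflow these features support concrete, here is a hedged shell sketch of installing the TLT launcher and fine-tuning a pretrained detection model. The package names, task name, and flags reflect typical TLT 3.0 usage but should be treated as assumptions; use the Get Started resources below for the exact commands for your model.

    # Hedged sketch of a typical TLT 3.0 transfer-learning flow.
    # Package, task, and flag names are assumptions; consult the TLT docs.

    # Install the TLT launcher into a Python 3 environment.
    pip3 install nvidia-pyindex
    pip3 install nvidia-tlt

    # List the tasks the launcher exposes.
    tlt --help

    # Fine-tune a pretrained object-detection model on your data, then export
    # it for deployment with DeepStream or Triton (spec file and key are yours).
    tlt detectnet_v2 train  -e specs/train_spec.txt -r results/ -k $MODEL_KEY
    tlt detectnet_v2 export -m results/weights/model.tlt -k $MODEL_KEY \
                            -o results/model.etlt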

Get Started Fast

  • Download the Transfer Learning Toolkit and access developer resources: Get started
  • Download models from NGC: Computer vision | Conversational AI 
  • Check out the latest developer tutorial: Training and Optimizing a 2D Pose-Estimation Model with the NVIDIA Transfer Learning Toolkit. Part 1 | Part 2 

Integration with Data-Generation and Labeling Tools for Faster and More Accurate AI

TLT 3.0 is also now integrated with platforms from several leading partners who provide large, diverse, and high-quality labeled data—enabling faster end-to-end AI/ML workflows. You can now use these partners’ services to generate and annotate data, seamlessly integrate with TLT for model training and optimization, and deploy the model using DeepStream SDK or Jarvis to create reliable applications in computer vision and conversational AI. 

Check out more partner blog posts and tutorials about synthetic data and data annotation with TLT.

Learn more about NVIDIA pretrained models and the Transfer Learning Toolkit >


New on NGC: PyTorch Lightning Container Speeds Up Deep Learning Research

With PyTorch Lightning, you can scale your models to multiple GPUs and leverage state-of-the-art training features such as 16-bit precision, early stopping, logging, pruning and quantization, while enabling faster iteration and reproducibility.

Deep learning research requires working at scale. Training on massive datasets or multilayered deep networks is computationally intensive and, because deep learning models are bound by memory, can take an impractically long time. The key is to compose deep learning models in a structured way so that they are decoupled from the engineering and the data, enabling researchers to iterate quickly.

PyTorch Lightning, developed by Grid.AI, is now available as a container on the NGC catalog, NVIDIA's hub of GPU-optimized AI and HPC software. PyTorch Lightning was designed to remove the roadblocks in deep learning research and let researchers focus on the science. Lightning is more of a style guide than a framework: it helps you structure and organize your code while providing utilities for common functions. With PyTorch Lightning, you can scale your models to multiple GPUs and leverage state-of-the-art training features such as 16-bit precision, early stopping, logging, pruning, and quantization, while enabling faster iteration and reproducibility.

Figure 1. PyTorch Lightning Philosophy

A Lightning model is composed of the following:

  • A LightningModule that encapsulates the model code
  • A LightningDataModule that encapsulates transforms, datasets, and DataLoaders
  • A Lightning Trainer that automates the training routine, with 70+ flags that make advanced features trivial
  • Callbacks that let users customize Lightning through hooks

The Lightning objects are implemented as hooks that can be overridden, making every single aspect of deep learning training highly configurable. With Lightning, you have full control over every detail:

  • Change how the backward step is done.
  • Change how 16-bit precision is initialized.
  • Add your own way of doing distributed training.
  • Add learning rate schedulers.
  • Use multiple optimizers.
  • Change the frequency of optimizer updates.
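
Here is a minimal sketch of how these pieces fit together; the MNIST dataset and small MLP are illustrative stand-ins, not something the container prescribes.

    # Minimal sketch of a LightningModule, a LightningDataModule, and the Trainer.
    # The dataset and network are illustrative placeholders.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    import pytorch_lightning as pl

    class LitClassifier(pl.LightningModule):
        """LightningModule: encapsulates the model code and training logic."""
        def __init__(self, lr=1e-3):
            super().__init__()
            self.model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                                       nn.ReLU(), nn.Linear(128, 10))
            self.lr = lr

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = nn.functional.cross_entropy(self.model(x), y)
            self.log("train_loss", loss)  # built-in logging
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.lr)

    class MNISTDataModule(pl.LightningDataModule):
        """LightningDataModule: encapsulates transforms, dataset, and DataLoaders."""
        def setup(self, stage=None):
            self.train_set = datasets.MNIST(".", train=True, download=True,
                                            transform=transforms.ToTensor())

        def train_dataloader(self):
            return DataLoader(self.train_set, batch_size=64, num_workers=2)

    # The Trainer automates the loop; flags such as gpus and precision turn on
    # multi-GPU and 16-bit training without touching the model code.
    if __name__ == "__main__":
        trainer = pl.Trainer(gpus=1, precision=16, max_epochs=1)
        trainer.fit(LitClassifier(), datamodule=MNISTDataModule())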

Get started today with the PyTorch Lightning Docker container from the NGC catalog.


Achieve up to 75% Performance Improvement for Communication-Intensive HPC Applications with NVTAGS

NVTAGS automates intelligent GPU assignment by profiling HPC applications and launching them with a custom GPU assignment tailored to an application and system to minimize communication costs.

Many GPU-accelerated HPC applications spend a substantial portion of their time in non-uniform, GPU-to-GPU communications. Additionally, in many HPC systems, different GPU pairs share communication links with varying bandwidth and latency. As a result, GPU assignment can substantially impact time to solution. Furthermore, on multi-node / multi-socket systems, communication performance can degrade when GPUs communicate with CPUs and NICs outside their system affinity. Because resource selection is system dependent, it is challenging to select resources such that communication costs are minimized.

NVIDIA Topology-Aware GPU Selection (NVTAGS) abstracts away the complexity of efficient resource selection. NVTAGS automates intelligent GPU assignment by profiling HPC applications and launching them with a custom GPU assignment tailored to an application and system to minimize communication costs. NVTAGS ensures that, regardless of a system’s communication topology, MPI processes communicate with the CPUs and NICs or HCAs within their own affinity. 

NVTAGS improves the performance of Chroma, MILC, and LAMMPS by 2% to 75% on one to 16 nodes.

Key NVTAGS Features:

  • Automated topology detection along with CPU and NIC/HCA binding, independent of the system and HPC application
  • Support for single- and multi-node, PCIe, and NVIDIA NVLink with NVIDIA Pascal, Volta, and Ampere architecture GPUs
  • Automatic caching of efficient GPU selection for future simulations
  • Straightforward integration with Slurm and Singularity
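
To see the kind of communication asymmetry NVTAGS optimizes for on your own system, you can print the connectivity matrix with nvidia-smi. This is a general diagnostic, not part of NVTAGS itself.

    # Show the GPU-to-GPU and GPU-to-NIC connectivity matrix for this system.
    # NVLink links (NV#) offer higher bandwidth than PCIe paths
    # (PIX, PXB, PHB, NODE, SYS), which is the disparity NVTAGS exploits.
    nvidia-smi topo -m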

Download NVTAGS 1.0.0 today. 

Additional Resources:

NVTAGS Product Page
Blog: Overcoming Communication Congestion for HPC Applications with NVIDIA NVTAGS


Improving Genomic Discovery with Machine Learning

Each person’s genome, which collectively encodes the biochemical machinery they are born with, is composed of over 3 billion letters of DNA. However, only a small subset of the genome (~4-5 million positions) varies between two people. Nonetheless, each person’s unique genome interacts with the environment they experience to determine the majority of their health outcomes. A key method of understanding the relationship between genetic variants and traits is a genome-wide association study (GWAS), in which each genetic variant present in a cohort is individually examined for correlation with the trait of interest. GWAS results can be used to identify and prioritize potential therapeutic targets by identifying genes that are strongly associated with a disease of interest, and can also be used to build a polygenic risk score (PRS) to predict disease predisposition based on the combined influence of variants present in an individual. However, while accurate measurement of traits in an individual (called phenotyping) is essential to GWAS, it often requires painstaking expert curation and/or subjective judgment calls.

In “Large-scale machine learning-based phenotyping significantly improves genomic discovery for optic nerve head morphology”, we demonstrate how using machine learning (ML) models to classify medical imaging data can be used to improve GWAS. We describe how models can be trained for phenotypes to generate trait predictions and how these predictions are used to identify novel genetic associations. We then show that the novel associations discovered improve PRS accuracy and, using glaucoma as an example, that the improvements for anatomical eye traits relate to human disease. We have released the model training code and detailed documentation for its use on our Genomics Research GitHub repository.

Identifying genetic variants associated with eye anatomical traits
Previous work has demonstrated that ML models can identify eye diseases, skin diseases, and abnormal mammogram results with accuracy approaching or exceeding state-of-the-art methods by domain experts. Because identifying disease is a subset of phenotyping, we reasoned that ML models could be broadly used to improve the speed and quality of phenotyping for GWAS.

To test this, we chose a model that uses a fundus image of the eye to accurately predict whether a patient should be referred for assessment for glaucoma. This model uses the fundus images to predict the diameters of the optic disc (the region where the optic nerve connects to the retina) and the optic cup (a whitish region in the center of the optic disc). The ratio of the diameters of these two anatomical features (called the vertical cup-to-disc ratio, or VCDR) correlates strongly with glaucoma risk.

A representative retinal fundus image showing the vertical cup-to-disc ratio, which is an important diagnostic measurement for glaucoma.

We applied this model to predict VCDR in all fundus images from individuals in the UK Biobank, which is the world’s largest dataset available to researchers worldwide for health-related research in the public interest, containing extensive phenotyping and genetic data for ~500,000 pseudonymized (the UK Biobank’s standard for de-identification) individuals. We then performed GWAS in this dataset to identify genetic variants that are associated with the model-based predictions of VCDR.

Applying a VCDR prediction model trained on clinical data to generate predicted values for VCDR to enable discovery of genetic associations for the VCDR trait.

The ML-based GWAS identified 156 distinct genomic regions associated with VCDR. We compared these results to a VCDR GWAS conducted by another group on the same UK Biobank data, Craig et al. 2020, where experts had painstakingly labeled all images for VCDR. The ML-based GWAS replicates 62 of the 65 associations found in Craig et al., which indicates that the model accurately predicts VCDR in the UK Biobank images. Additionally, the ML-based GWAS discovered 93 novel associations.

Number of statistically significant GWAS associations discovered by exhaustive expert labeling approach (Craig et al., left), and by our ML-based approach (right), with shared associations in the middle.

The ML-based GWAS improves polygenic model predictions
To validate that the novel associations discovered in the ML-based GWAS are biologically relevant, we developed independent PRSes using the Craig et al. and ML-based GWAS results, and tested their ability to predict human-expert-labeled VCDR in a subset of UK Biobank as well as a fully independent cohort (EPIC-Norfolk). The PRS developed from the ML-based GWAS showed greater predictive ability than the PRS built from the expert labeling approach in both datasets, providing strong evidence that the novel associations discovered by the ML-based method influence VCDR biology, and suggesting that the improved phenotyping accuracy (i.e., more accurate VCDR measurement) of the model translates into a more powerful GWAS.

The correlation between a polygenic risk score (PRS) for VCDR generated from the ML-based approach and the exhaustive expert labeling approach (Craig et al.). In these plots, higher values on the y-axis indicate a greater correlation and therefore greater prediction from only the genetic data. [* — p ≤ 0.05; *** — p ≤ 0.001]
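
For readers unfamiliar with how a PRS combines GWAS results, the small Python sketch below shows the standard form: a weighted sum of an individual's allele dosages, with weights given by the per-variant effect sizes estimated in the GWAS. The numbers are toy values, not data from the study.

    # Illustrative polygenic risk score: a weighted sum of allele dosages.
    # Toy values only; a real VCDR PRS uses thousands of variants and
    # GWAS-estimated effect sizes.
    import numpy as np

    effect_sizes = np.array([0.12, -0.05, 0.30, 0.08, -0.21])  # per-variant betas
    dosages = np.array([2, 0, 1, 1, 2])  # copies of the effect allele (0, 1, or 2)

    prs = float(np.dot(effect_sizes, dosages))
    print(f"Polygenic risk score: {prs:.2f}")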

As a second validation, because we know that VCDR is strongly correlated with glaucoma, we also investigated whether the ML-based PRS was correlated with individuals who had either self-reported that they had glaucoma or had medical procedure codes suggestive of glaucoma or glaucoma treatment. We found that the PRS for VCDR determined using our model predictions were also predictive of the probability that an individual had indications of glaucoma. Individuals with a PRS 2.5 or more standard deviations higher than the mean were more than 3 times as likely to have glaucoma in this cohort. We also observed that the VCDR PRS from ML-based phenotypes was more predictive of glaucoma than the VCDR PRS produced from the extensive manual phenotyping.

The odds ratio of glaucoma (self-report or ICD code) stratified by the PRS for VCDR determined using the ML-based phenotypes (in standard deviations from the mean). In this plot, the y-axis shows the probability that the individual has glaucoma relative to the baseline rate (represented by the dashed line). The x-axis shows standard deviations from the mean for the PRS. Data are visualized as a standard box plot, which illustrates values for the mean (the orange line), first and third quartiles, and minimum and maximum.

Conclusion
We have shown that ML models can be used to quickly phenotype large cohorts for GWAS, and that these models can increase statistical power in such studies. Although these examples were shown for eye traits predicted from retinal imaging, we look forward to exploring how this concept could generally apply to other diseases and data types.

Acknowledgments
We would like to especially thank co-author Dr. Anthony Khawaja of Moorfields Eye Hospital for contributing his extensive medical expertise. We also recognize the efforts of Professor Jamie Craig and colleagues for their exhaustive labeling of UK Biobank images, which allowed us to make comparisons with our method. Several authors of that work, as well as Professor Stuart MacGregor and collaborators in Australia and at Max Kelsen have independently replicated these findings, and we value these scientific contributions as well.

Categories
Misc

Run RAPIDS on Microsoft Windows 10 Using WSL 2—The Windows Subsystem for Linux

A tutorial to run your favorite Linux software, including NVIDIA CUDA, on Windows. RAPIDS is now more accessible to Windows users! This post walks you through installing RAPIDS on Windows Subsystem for Linux (WSL). WSL is a Windows 10 feature that enables users to run native Linux command-line tools directly on Windows. Using this feature does not require a dual-boot environment.

This post was originally published on the RAPIDS AI Blog.

A tutorial to run your favorite Linux software, including NVIDIA CUDA, on Windows

RAPIDS is now more accessible to Windows users! This post walks you through installing RAPIDS on Windows Subsystem for Linux (WSL). WSL is a Windows 10 feature that enables users to run native Linux command-line tools directly on Windows. Using this feature does not require a dual-boot environment, taking away complexity and hopefully saving you time. You'll need access to an NVIDIA GPU based on the NVIDIA Pascal architecture or newer. Let's get started right away.

Getting Started

To install RAPIDS, you’ll need to do the following:

  1. Install the latest builds from the Microsoft Insider Program.
  2. Install the NVIDIA preview driver for WSL 2.
  3. Install WSL 2.
  4. Install RAPIDS.

Steps 1–3 can be completed by following the NVIDIA CUDA on WSL guide. However, there are some gotchas, so this article walks through each section and points out what to look out for. We recommend opening the guide in a separate tab alongside this post to make sure that you don't miss anything. Before you start, be aware that all the steps in the guide must be carried out in order; it's particularly important that you install a fresh version of WSL 2 only after installing the new build and driver. Also note that the CUDA Toolkit will be installed along with RAPIDS in step 4, so stop following the CUDA on WSL guide once you reach the Setting up CUDA Toolkit section.

Installing the latest builds from the Microsoft Insider program

For your program to run correctly, you need to be using Windows Build version 20145 or higher. When installing the builds, some things to note are:

  • Start off by navigating to your Windows menu and selecting Settings > Update & Security > Windows Update. Make sure that you don't have any pending Windows updates; if you do, click the update button so you start without any.
  • Dev Channel (previously Fast ring): The guide mentions Fast ring as the channel to download your build from, but this channel is now called the Dev Channel. Windows calls the process of updating and installing the latest builds ‘flighting.’ During this process, you must select the Dev Channel when choosing which updates to receive.
  • Downloading and updating requires a restart and can take up to 90 minutes. Feel free to grab a coffee while you wait ;).
  • After you’ve restarted your computer, check your build version by running winver via the Windows Run command. It can be a little tricky to identify the right number. Here’s what you should look for after a successful installation (BUILD 20145 or higher):
Figure 1: Build version is now OS Build 21296, which is sufficient to run WSL 2.

Once you’ve confirmed your build, move onto step 2.

Installing NVIDIA drivers

Next, you’ll need to install an NVIDIA Driver. Keep the following in mind:

  • Select the driver based on the type of NVIDIA GPU in your system. To verify your GPU type, look for the NVIDIA Control Panel in your Start menu; the GPU name appears there. See the CUDA on Windows Subsystem for Linux (WSL) public preview for more information.
  • Once the download is complete, install the driver using the executable. We strongly recommend choosing the default install location.
  • To check that the driver installed successfully, run the command nvidia-smi in PowerShell. It should output a table with information about your GPU and the driver; you'll notice the driver version is the same as the one you downloaded.
Figure 2: The NVIDIA driver, version 465.21, has been installed correctly.

(Your table might be much shorter and not show any GPU processes. As long as you can see a table and no errors, your install was successful!) If your driver is installed, jump to step 3. If nothing appears, check whether you've missed any of the steps and whether your build version is correct.

Installing WSL 2

Next, you’ll install WSL 2 with a Linux distribution of your choice using the docs here. Make sure that the distribution you choose is supported by RAPIDS. You can confirm this here. The rest of this post describes the installation of WSL 2 with Ubuntu 18.04. These steps should work similarly with other supported distributions.

There are two ways to install your RAPIDS-supported Linux distribution with WSL 2 on Windows 10. The instructions in the Windows guide can seem overwhelming, so we've distilled them down to the most important parts here:

Using the command line

  • Open your command line and make sure you're running it as Administrator.
  • Find out which Linux distributions are available and support WSL by typing the command wsl --list --online.
  • To install a distribution, use the command wsl --install -d <DistributionName>.
  • For Ubuntu 18.04, this translates to wsl --install -d Ubuntu-18.04 (note the capital U). This should download and install your Linux distribution.
  • Your selected distribution should either immediately open or appear in your Windows Start menu.
  • If this doesn't happen, double-check that your Linux distribution and WSL install succeeded by running wsl.exe --list. If no distribution appears, navigate to “Programs” in your Control Panel and confirm that the “Windows Hypervisor Platform” and “Windows Subsystem for Linux” boxes are checked, as in the image below. Once confirmed, reboot your computer and try running the install again (possibly twice). Ideally, the WSL terminal should pop up right after the installation.
Figure 3: In case your WSL terminal doesn't open right away, make sure these features are also checked on your system.
  • When opening your WSL terminal for the first time, you will be prompted to set up a default (non-root) user. Ensure that you do not skip this step, as you will need to be the root user to install other packages.
  • Once you’ve set the default user, proceed to reboot your machine. When you return, you’ll be all set for step 4.
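
For reference, here are the commands from this section gathered in one place, run from an elevated command prompt:

    # List the distributions that WSL can install.
    wsl --list --online

    # Install a supported distribution (Ubuntu 18.04 in this walkthrough).
    wsl --install -d Ubuntu-18.04

    # Confirm the installed distributions.
    wsl.exe --list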

Through the Microsoft Store

  • If you already know which distribution you would like to use, you can download and install it directly from the Microsoft Store on your machine.
  • You’ll need to set the default user and do a reboot in this case as well.

Once you’ve completed this step, you’re ready to install the CUDA Toolkit and almost done!

Install RAPIDS

  • If you don’t have it already, start by installing and activating Miniconda in your WSL terminal. We’ll be using the conda command to install the packages we need in this step.
  • You can install RAPIDS with a single conda command. Pick the exact command for your setup from the RAPIDS release selector; a representative example is sketched below.
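
The pins below (release 21.06, CUDA 11.2, Python 3.8) are assumptions from around the time of writing; substitute whatever the release selector at rapids.ai gives you for your system.

    # Hedged sketch of a RAPIDS conda install; take the exact pins from the
    # RAPIDS release selector (https://rapids.ai/start.html).
    conda create -n rapids-21.06 -c rapidsai -c nvidia -c conda-forge \
        rapids=21.06 python=3.8 cudatoolkit=11.2

    # Activate the environment before importing any RAPIDS library.
    conda activate rapids-21.06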

To test your installation, start up the RAPIDS conda environment. You can do this by:

  • Typing conda info --envs, which will tell you the name of the installed RAPIDS environment, then activating it with conda activate.
  • Note: cuDF is supported only on Linux and with Python versions 3.7 and later.
  • Finally, importing any RAPIDS library or starting a Jupyter notebook, as in the quick check below.
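
Here is a minimal sanity check, assuming the environment installed above is active; the DataFrame values are arbitrary.

    # Minimal check that cuDF imports and runs on the GPU inside WSL 2.
    import cudf

    gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
    print(gdf["a"].sum())  # 6, computed on the GPU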

Hopefully, your installation was successful. RAPIDS is open-source, so if you managed to get this far and would like to contribute, take another look at the contributing guide of any of our libraries or join the RAPIDS Slack channel to find out more.


Sherd Alert: GPU-Accelerated Deep Learning Sorts Pottery Fragments as Well as Expert Archeologists

A pair of researchers at Northern Arizona University used GPU-based deep-learning algorithms to categorize sherds — tiny fragments of ancient pottery — as well as, or better than, four expert archaeologists. The technique, outlined in a paper published in the June issue of The Journal of Archaeological Science by Leszek Pawlowicz and Christian Downum, focused…



NVIDIA Studio Goes 3D: Real-Time Ray Tracing and AI Accelerate Adobe’s New Substance 3D Collection of Design Applications

The NVIDIA Studio ecosystem continues to deliver time-saving features and visual improvements to top creative applications. Today, Adobe announced a significant update to their 3D lineup, with new and improved tools available in the Adobe Substance 3D Collection: new versions of Substance 3D Painter, Designer and Sampler, as well as the new application Substance 3D…
