Categories
Misc

Music to the Gears: NVIDIA’s Clément Farabet on Orchestrating AI Training for Autonomous Vehicles

Autonomous vehicles are one of the most complex AI challenges of our time. For AVs to operate safely in the real world, the networks running within them must come together as an intricate symphony, which requires intensive training, testing and validation on massive amounts of data. Clément Farabet, vice president of AI infrastructure at NVIDIA, …

The post Music to the Gears: NVIDIA’s Clément Farabet on Orchestrating AI Training for Autonomous Vehicles appeared first on NVIDIA Blog.


How MONAI Fuels Open Research for Medical AI Workflows

MONAI is fueling open innovation for medical imaging with tools to accelerate image annotation, train state-of-the-art deep learning models, and create AI applications that drive research innovation.

It’s never been more important to put powerful AI tools in the hands of the world’s leading medical researchers. That’s why NVIDIA has invested in building a collaborative open-source foundation with MONAI, the Medical Open Network for AI. MONAI is fueling open innovation for medical imaging by providing tools that accelerate image annotation, train state-of-the-art deep learning models, and create AI applications that help drive research breakthroughs.

Developing domain-specific AI can be challenging, as a lack of best practices and open blueprints creates impediments at every stage, from research and development to clinical evaluation and deployment. Researchers need a common foundation to accelerate the pace of medical AI research.

The core principle behind creating Project MONAI is to unite doctors with data scientists to unlock the power of medical data. MONAI is a collaborative open-source initiative built by academic and industry leaders to establish and standardize the best practices for deep learning in healthcare imaging. Created by the imaging research community, for the imaging research community, MONAI is accelerating innovation in deep learning models and deployable applications for medical AI workflows.

Helping to guide MONAI’s vision and mission are an Advisory Board and nine working groups, led by thought leaders from across the medical research community. These focused working groups let leaders in each field concentrate their efforts and make effective contributions to the community. The working groups are open for anyone to attend.

MONAI is an open-source, PyTorch-based framework for building, training, deploying, and optimizing AI workflows in healthcare. It focuses on providing high-quality, user-friendly software that facilitates reproducibility and easy integration. With these tenets, researchers can share their results and build upon each other’s work, fostering collaboration among academic and industry researchers.

The suite of libraries, tools, and SDKs within MONAI provides a robust, common foundation that covers the end-to-end medical AI life cycle, from annotation through deployment.

Medical imaging annotation and segmentation

MONAI Label is an intelligent image labeling and learning tool that uses AI assistance to reduce the time and effort of annotating new datasets. Based on user interactions, MONAI Label trains an AI model for a specific task and continuously updates that model as it receives additional annotated images.

MONAI Label provides multiple sample applications that include state-of-the-art interactive segmentation approaches such as DeepGrow and DeepEdit. These sample applications work out of the box, so you can start annotating with minimal effort. Developers can also build their own MONAI Label applications around novel algorithms.
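A typical quick start, based on the MONAI Label documentation, looks like the following. The app name and the studies path are illustrative placeholders; substitute your own dataset:

```shell
# Install MONAI Label and download the sample radiology app
# (includes DeepEdit-style interactive segmentation models).
pip install monailabel
monailabel apps --download --name radiology --output apps

# Serve the app over HTTP; viewer clients connect to this server.
# The studies path below is a placeholder for your own imaging dataset.
monailabel start_server --app apps/radiology \
    --studies datasets/Task09_Spleen/imagesTr \
    --conf models deepedit
```

Once the server is running, a viewer plugin such as the 3D Slicer MONAI Label extension connects to it for interactive annotation.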

Client integrations help clinicians, radiologists, and pathologists interact with MONAI Label applications within their typical workflows. These clinical interactions are not passive: experts can correct annotations and immediately trigger training loops that adapt the model to their input on the fly.

MONAI Label has integrations for 3D Slicer and OHIF in radiology, and for QuPath and Digital Slide Archive in pathology. Developers can also integrate MONAI Label into a custom viewer using the server and client APIs, which are well abstracted and documented for seamless integration.

MONAI Label bridges the research world with clinical collaborators and can be integrated into any viewer, including 3D Slicer and OHIF
Figure 1. MONAI Label architecture

Domain-specific algorithms and research pipelines

MONAI Core is the flagship library of Project MONAI, providing domain-specific capabilities for training AI models for healthcare imaging. These capabilities include medical-specific image transforms, state-of-the-art transformer-based 3D segmentation algorithms like UNETR, and an AutoML framework named DiNTS.

With these foundational components, users can integrate MONAI’s domain-specialized components into their existing PyTorch programs, or interface with MONAI at the workflow level for robust training and research experiments. A rich set of functional examples demonstrates these capabilities and the integration with other open-source packages like PyTorch Lightning, PyTorch Ignite, and NVIDIA FLARE. Finally, state-of-the-art reproducible research pipelines are included for self-supervised learning, AutoML, vision transformers for 3D, and 3D segmentation.
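To make the segmentation workflow concrete, the sketch below computes the Dice similarity coefficient, the standard overlap metric behind losses such as MONAI’s DiceLoss. This is a minimal NumPy illustration of the metric itself, not MONAI’s implementation:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Two toy 3D label volumes standing in for organ segmentation masks.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1   # 8 voxels
b[1:3, 1:3, 1:4] = 1   # 12 voxels, 8 of which overlap with a
print(dice_score(a, b))  # 2*8 / (8 + 12) = 0.8
```

A Dice of 1.0 means perfect overlap; in practice MONAI’s DiceLoss optimizes 1 minus this quantity during training.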

Screenshots of research pipelines on MONAI, such as Brain Tumor Segmentation, DeepAtlas, Vision Transformers and Multi Modal AI for Disease Classification.
Figure 2. State-of-the-art research pipelines available on MONAI Core

Deploying medical AI to clinical production

An estimated 87% of data science projects never make it into production. Several steps are involved in crossing the chasm between a trained model and a deployable app: selecting the correct DICOM datasets, preprocessing input images, performing inference, exporting the results, visualizing them, and applying further optimizations.
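The steps above can be sketched as a plain-Python pipeline. Everything here is hypothetical and illustrative: the function names and the thresholding “model” stand in for a real DICOM pipeline and a trained network, and are not the MONAI Deploy API:

```python
import numpy as np

def select_series(studies):
    # Pick the DICOM series to run inference on (here: the first CT series).
    return next(s for s in studies if s["modality"] == "CT")

def preprocess(volume):
    # Normalize voxel intensities to [0, 1] before inference.
    v = volume.astype(np.float32)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def infer(model, volume):
    # Run the (stub) model on the preprocessed volume.
    return model(volume)

def export_results(mask):
    # A real app would write DICOM-SEG; here we just summarize the mask.
    return {"voxels_segmented": int(mask.sum())}

# Stub "model": a simple intensity threshold standing in for a trained network.
model = lambda v: (v > 0.5).astype(np.uint8)

studies = [{"modality": "CT", "pixels": np.arange(27).reshape(3, 3, 3)}]
series = select_series(studies)
result = export_results(infer(model, preprocess(series["pixels"])))
print(result)
```

Each stage maps to one of the steps listed above; MONAI Deploy’s value is packaging this chain into a testable, portable application rather than ad hoc scripts.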

MONAI Deploy aims to become the de facto standard for developing, packaging, testing, deploying, and running medical AI applications in clinical production. MONAI Deploy creates a set of intermediate steps where researchers and physicians can build confidence in the techniques and approaches used with AI, making the workflow iterative until the AI inference infrastructure is ready for clinical environments.

The MONAI Deploy App SDK enables developers to take an AI model and turn it into an AI application. Available on GitHub, MONAI Deploy is also building open reference implementations of an inference orchestration engine, an informatics gateway, and a workflow manager to help drive clinical integration.

To drive innovation to the clinic, MONAI is building open reference implementations of an inference orchestration engine, informatics gateway, and workflow manager.
Figure 3. MONAI Deploy’s modular and open reference deployment framework

Advancing medical AI

The world’s leading research centers, including King’s College London, NIH National Cancer Institute, NHS Guy’s and St. Thomas’ Trust, Stanford University, Mass General Brigham, and Mayo Clinic, are building and publishing with MONAI. Integration partners like AWS, Google Cloud, and Microsoft are all standing up MONAI on their platforms. To date, MONAI has garnered over 425,000 downloads and has a community of over 190 contributors who have published over 140 research papers.

The groundbreaking research using MONAI is fueled by the growth of its open community of contributors. Together, these researchers and innovators are collaborating on AI best practices in a platform that spans the full medical AI project lifecycle. From training to deployment, MONAI is bringing the healthcare community together to unlock the power of medical data and accelerate AI into clinical impact.

To learn more about MONAI and get started today, visit MONAI.io. A library of tutorials and recordings of MONAI bootcamps are also available for MONAI users on the MONAI YouTube channel.


Sensational Surrealism Astonishes This Week ‘In the NVIDIA Studio’

3D phenom FESQ joins us ‘In the NVIDIA Studio’ this week to share his sensational and surreal animation ‘Double/Sided’ as well as an inside look into his creative workflow. ‘Double/Sided’ is deeply personal to FESQ, who said the piece “translates really well to a certain period of my life when I was juggling both a programmer career and an artist career.”

The post Sensational Surrealism Astonishes This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.


Meet the Omnivore: Developer Builds Bots With NVIDIA Omniverse and Isaac Sim

While still in grad school, Antonio Serrano-Muñoz has helped author papers spanning planetary gravities, AI-powered diagnosis of rheumatoid arthritis and robots that precisely track millimetric-sized walkers, like ants.

The post Meet the Omnivore: Developer Builds Bots With NVIDIA Omniverse and Isaac Sim appeared first on NVIDIA Blog.


Turbocharging Multi-Cloud Security and Application Delivery with VirtIO Offloading

By accelerating VirtIO-net in hardware, poor network performance can be avoided while maintaining a transparent software implementation, including full support for VM live migration.

The rapid increase of traffic within data centers, combined with the growing adoption of virtualization, is placing strain on traditional data center architectures.

Customarily, virtual machines rely on software interfaces such as VirtIO to connect with the hypervisor. Although VirtIO is significantly more flexible than SR-IOV, it can consume up to 50% more compute power in the host, reducing the server’s overall efficiency.

Similarly, the adoption of software-defined data centers is on the rise. Both virtualization and software-defined workloads are extremely CPU-intensive. This creates inefficiencies that reduce overall performance system-wide. Furthermore, infrastructure security is potentially compromised as the application domain and networking domain are not separated.

F5 and NVIDIA recently presented how to solve these challenges at NVIDIA GTC. F5 discussed accelerating its BIG-IP Virtual Edition (VE) virtualized appliance portfolio by offloading VirtIO to the NVIDIA BlueField-2 data processing unit (DPU) and ConnectX-6 Dx SmartNIC. In the session, they discussed how the DPU provides optimal acceleration and offload thanks to its onboard networking ASIC and Arm processor cores, freeing CPU cores to focus on application workloads.

Offloading to the DPU also provides domain isolation to secure resources more tightly. Support for VirtIO also enables dynamic composability, creating a software-defined, hardware-accelerated solution that significantly decreases reliance on the CPU while maintaining the flexibility that VirtIO offers.

Virtual switching acceleration

DPUs accelerate VirtIO in hardware, avoiding the poor network performance of software implementations.
Figure 1. Offloading VirtIO moves the virtual datapath out of software and into the hardware of the SmartNIC or DPU, where it can be accelerated

Virtual switching was born as a consequence of server virtualization. Hypervisors need the ability to enable transparent traffic switching between VMs and with the outside world.

One of the most commonly used virtual switching software solutions is Open vSwitch (OVS). NVIDIA Accelerated Switching and Packet Processing (ASAP2) technology accelerates virtual switching to improve performance in software-defined networking environments.

ASAP2 supports using vDPA to offload virtual switching (the OVS data plane) into hardware while leaving the control plane unchanged. Flow rules are programmed into the eSwitch within the network adapter or DPU, and standard APIs and common libraries such as DPDK can be used to deliver significantly higher OVS performance without the associated CPU load.

ASAP2 also supports SR-IOV for hardware acceleration of the data plane. The combination of these two capabilities provides a software-defined, hardware-accelerated solution that resolves the performance issues associated with virtual SDN vSwitching solutions.
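As an illustration, enabling OVS hardware offload with ASAP2 on a ConnectX or BlueField adapter typically involves steps like the following. The interface name and PCI address are placeholders, and the exact procedure varies by driver and OS version:

```shell
# Create virtual functions for the VMs (placeholder interface name).
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# Switch the adapter's embedded switch (eSwitch) into switchdev mode
# so OVS flow rules can be offloaded (placeholder PCI address).
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Allow TC flow rules to be pushed into hardware on the uplink.
ethtool -K enp3s0f0 hw-tc-offload on

# Tell OVS to use hardware offload, then restart it to apply.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch
```

After this, OVS programs matched flows into the eSwitch, and only first packets of new flows traverse the software datapath.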

Accelerated networking

Earlier this year, NVIDIA released NVIDIA DOCA, a framework that simplifies application development for BlueField DPUs, making the DPU easier to program and manage. Applications developed using DOCA for BlueField will also run without changes on future versions, ensuring forward compatibility.

DOCA consists of industry-standard APIs, libraries, and drivers. One of these drivers is DOCA VirtIO-net, which accelerates the VirtIO interface. When using BlueField, the VirtIO interface runs on the DPU hardware, reducing the CPU’s involvement and accelerating VirtIO’s performance while enabling features such as live migration.

Bar chart of performance testing done with VirtIO offloading shows a dramatic increase in performance and improvements in processing time and packets processed
Figure 2. Performance advantages available with VirtIO offloading

BIG-IP VE results

During the joint GTC session, F5 demonstrated the advantages of running with hardware acceleration versus without it. The demonstration showed BIG-IP VE performing SSL termination for NGINX, with the Tsung traffic generator sending 512KB packets through multiple instances of BIG-IP VE.

With VirtIO running on the host, throughput reached only 5 Gbps, the test took 187 seconds to complete, and only 80% of all packets were processed.

The same scenario with hardware acceleration delivered 16 Gbps of throughput, completed in only 62 seconds, and processed 100% of the packets.
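The improvement figures follow directly from these numbers; a quick arithmetic check:

```python
# Sanity-check the demo numbers: 5 -> 16 Gbps and 187 -> 62 seconds.
baseline_gbps, accelerated_gbps = 5.0, 16.0
baseline_s, accelerated_s = 187.0, 62.0

# 16/5 = 3.2x the baseline throughput (quoted as 320%).
throughput_ratio = accelerated_gbps / baseline_gbps

# 1 - 62/187 ~= 0.67, i.e. roughly a two-thirds reduction in processing time.
time_reduction = 1 - accelerated_s / baseline_s

print(f"{throughput_ratio:.1f}x throughput, {time_reduction:.0%} less time")
```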

Summary

Increasing network speeds, virtualization, and software-defined networking are adding strain on data center systems and creating a need for efficiency improvements.

VirtIO is a well-established I/O virtualization interface, but it is implemented entirely in software. SR-IOV was developed precisely to support high-performance, efficient offload and acceleration of network functionality, but it requires a specific driver in each VM. By accelerating VirtIO-net in hardware, you can avoid poor network performance while maintaining a transparent software implementation, including full support for VM live migration.

The demonstration with F5 Networks showed a 320% improvement in throughput, a 66% reduction in processing time, and 100% of packets processed. This is strong evidence that the way forward is hardware vDPA, which combines the out-of-the-box availability of VirtIO drivers with the performance gains of DPU hardware acceleration.

This session was presented simulive at NVIDIA GTC and can be replayed. For more information about the F5-NVIDIA joint solution, which demonstrates the benefits of reduced CPU utilization while achieving high performance with VirtIO, see the GTC session Multi-cloud Security and Application Delivery with VirtIO.