With GeForce NOW, over 5 million gamers are playing their favorite games in the cloud on PC, Mac, Chromebook, NVIDIA SHIELD TV, Android and iOS devices. With over 800 instantly available games and 80+ free-to-play games, there’s something for everyone. And there are multiple ways to build your library. We’ll review how to sync your…
As enterprises modernize their data centers to power AI-driven applications and data science, NVIDIA and VMware are making it easier than ever to develop and deploy a wide range of AI workloads in the modern hybrid cloud. The companies have teamed up to optimize the just-announced VMware vSphere 7 Update 2.
Runs on VMware vSphere; Optimized, Certified and Supported by NVIDIA; Hundreds of Thousands of Customers in World’s Largest Industries Can Now Adopt NVIDIA AI Enterprise at Scale
Artists, this is your chance to push past creative limits and win great prizes while exploring NVIDIA Omniverse through a new design contest. Called “Create with Marbles,” the contest is set in Omniverse, the groundbreaking platform for virtual collaboration, creation and simulation, and based on the Marbles RTX demo that first previewed at…
Open Neural Network Exchange (ONNX) is a powerful and open format built to represent machine learning models. The final outcome of training any machine learning or deep learning algorithm is a model file that efficiently maps input data to output predictions.
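As a quick, generic illustration (not part of the article), a trained PyTorch model can be saved to such a model file with the ONNX exporter; the model choice and file names below are arbitrary placeholders.

```python
import torch
import torchvision

# Any torch.nn.Module exports the same way; ResNet-18 is just an example.
model = torchvision.models.resnet18(pretrained=True).eval()

# The exporter traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",                         # the resulting ONNX model file
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},    # allow a variable batch size
)
```

The resulting .onnx file can then be loaded by any runtime or framework that understands the format.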
Driver assistance technology is an incredibly active research domain – from supervised assistance functions all the way to fully autonomous driving. The best way to showcase the capabilities of novel AV approaches is to demonstrate them in a real car, but there are significant challenges to this type of deployment.
Getting to the point where a new approach for multi-agent prediction, camera-based localization or night-time obstacle detection runs in a car requires efforts on multiple levels:
A vehicle needs to be retrofitted with sensors, AI compute hardware, data storage, vehicle IO interface and potentially even a drive-by-wire interface.
Middleware needs to be implemented to orchestrate the individual functional components.
Fundamental software components for vehicle IO, sensor interfacing, calibration, and recording need to be implemented.
The NVIDIA DRIVE AGX autonomous vehicle compute platform is designed to substantially simplify these efforts, enabling researchers to focus on what’s most important.
Research in Motion
Prof. Daniel Watzenig and the Autonomous Racing Graz team (a collaboration between Graz University of Technology and Virtual Vehicle Research) are pushing autonomous driving to the limit with driverless racing. Powered by NVIDIA DRIVE AGX, the team’s vehicle secured third place in the 2020 Roborace Season Alpha and was the highest-finishing academic team.
“Autonomous racing comes with very high requirements not only on software and hardware, but also on weight and space. As researchers, we need to focus on trying out new approaches and iterate quickly – the NVIDIA DRIVE AGX platform is a perfect fit for these needs and has proven to be a key factor in our team’s success,” Watzenig said.
The NVIDIA DRIVE AGX Developer Kit provides the hardware, software and sample applications needed for the development of autonomous vehicles. The platform is built on production automotive-grade silicon, features an open software framework, and offers a large ecosystem of supported automotive-grade sensors to choose from. The developer kit delivers unrivaled compute performance in a compact form factor, reaching up to 320 TOPS (INT8).
An ADAS and AV development platform must offer a software environment that supports established research tools. The DRIVE AGX Developer Kit runs a tailored Linux derivative that provides a familiar environment to researchers, and support for numerous popular Linux libraries makes it easy to migrate existing code.
The comprehensive NVIDIA DRIVE Software stack gives researchers a head start with low-level hardware interfacing and middleware out of the box. DRIVE OS provides the hypervisor, CUDA, deep learning inference with TensorRT, and camera interfacing. DriveWorks adds tools and APIs for calibration, sensor and vehicle interfacing, recording, and much more. Finally, samples showcase typical AV modules that can be used as a reference.
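The DRIVE OS and DriveWorks APIs themselves are documented with the kit; as a generic illustration of the TensorRT piece, the sketch below builds an FP16 inference engine from an ONNX file using TensorRT’s Python API. This is a minimal sketch: the file name is a placeholder, and exact builder calls vary across TensorRT versions.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# ONNX models require an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:               # placeholder file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)             # enable FP16 kernels

# Older TensorRT releases return an engine directly; newer ones use
# builder.build_serialized_network(network, config) instead.
engine = builder.build_engine(network, config)
```

The built engine can then serve low-latency inference on the in-vehicle GPU.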
Another important research requirement is that the development platform should be compact and durable. While it is possible to install desktop computers and additional hardware in cars to develop ADAS and AV functions, these setups typically require a large amount of space and add additional points of failure, especially since these components will likely not be auto-grade.
Getting Started with DRIVE AGX
The DRIVE AGX Developer Kit is available through the NVIDIA DRIVE Developer Program for DRIVE AGX. Please contact your NVIDIA representative (or contact us) to ensure the necessary agreements have been signed before requesting to join the program. Users may only join with a corporate or university email address.
The DRIVE AGX Developer Kit comes with plenty of resources to jump-start development:
The “Developing with DRIVE AGX” section of the product page explains the setup, provides a quick-start guide and includes an intuitive setup video.
Extensive documentation explains technical details and walks through samples you can use as a reference for your own applications.
Like a traveler who overpacks a suitcase with a closet’s worth of clothes, most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus. But an individual cell pulls out only the subsection of genetic apparel that it needs to function, with each…
Facebook AI researchers this week announced SEER, a self-supervised model that surpasses the best self-supervised systems, and also outperforms supervised models on tasks including image classification, object detection, and segmentation.
SEER, a billion-parameter model pretrained on a billion random images, combines RegNet architectures with the SwAV online clustering approach.
Instead of relying on labeled datasets, self-supervised learning models for computer vision generate data labels by finding relationships between images with no annotations or metadata. Such models are considered key to developing AI with “common sense,” says Yann LeCun, Facebook AI’s chief scientist.
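For intuition, here is a heavily simplified sketch of SwAV’s “swapped prediction” objective, in which each augmented view of an image must predict the soft cluster assignment (“code”) computed from the other view. This is an illustrative reimplementation under simplifying assumptions, not Facebook’s actual code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    # Sinkhorn-Knopp normalization: turns prototype scores into soft
    # cluster assignments ("codes") with roughly uniform cluster usage.
    q = torch.exp(scores / eps).t()               # (n_prototypes, batch)
    q /= q.sum()
    k, b = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= k   # normalize prototype rows
        q /= q.sum(dim=0, keepdim=True); q /= b   # normalize sample columns
    return (q * b).t()                            # (batch, n_prototypes)

def swav_loss(z1, z2, prototypes, temp=0.1):
    # z1, z2: L2-normalized embeddings of two augmented views of the same
    # batch; prototypes: learnable (n_prototypes, dim) matrix.
    p1 = z1 @ prototypes.t()                      # prototype scores per view
    p2 = z2 @ prototypes.t()
    q1, q2 = sinkhorn(p1), sinkhorn(p2)           # targets (no gradient)
    # Each view predicts the other view's code (swapped cross-entropy).
    return -0.5 * (
        (q2 * F.log_softmax(p1 / temp, dim=1)).sum(dim=1).mean()
        + (q1 * F.log_softmax(p2 / temp, dim=1)).sum(dim=1).mean()
    )
```

In SEER, the embeddings come from a RegNet trunk and the prototypes are learned jointly with the network online, so no labels or offline clustering passes are needed.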
After using a billion public Instagram images for pretraining, SEER achieved 84.2 percent accuracy on the popular ImageNet dataset, beating state-of-the-art self-supervised systems. The researchers also trained SEER using just 10 percent of the ImageNet images and still achieved nearly 78 percent accuracy. Even when trained with just 1 percent of ImageNet, the model was over 60 percent accurate.
SEER was trained on 512 NVIDIA V100 Tensor Core GPUs, each with 32GB of memory, for 30 days, said Facebook software engineer Priya Goyal. The researchers used mixed precision from the NVIDIA Apex library and gradient checkpointing tools from PyTorch to reduce memory usage and increase training speed.
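As a generic illustration of those two memory-saving techniques on a toy model (a sketch, not SEER’s actual training code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from apex import amp                         # NVIDIA Apex mixed precision
from torch.utils.checkpoint import checkpoint

# Tiny stand-in model; SEER's real trunk is a billion-parameter RegNet.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),                        # stem
    nn.Sequential(nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten()),  # expensive block
    nn.Linear(64, 10),                                     # head
).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Mixed precision: "O1" patches eligible ops to run in FP16 while the
# model weights stay in FP32.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(8, 3, 32, 32, device="cuda")
labels = torch.randint(0, 10, (8,), device="cuda")

x = model[0](images)
# Gradient checkpointing: the block's activations are recomputed during
# the backward pass instead of being stored, trading compute for memory.
features = checkpoint(model[1], x)
loss = F.cross_entropy(model[2](features), labels)

optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()                   # loss scaling avoids FP16 underflow
optimizer.step()
```

Both techniques compose cleanly, which is what makes billion-parameter pretraining fit on 32GB GPUs.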
The researchers chose the RegNet architecture for its ability to scale to billions or trillions of parameters while accommodating runtime and memory constraints. The SwAV algorithm helped achieve record performance with 6x less training time.
“Eliminating the need for human annotations and metadata enables the computer vision community to work with larger and more diverse data sets, learn from random public images, and potentially mitigate some of the biases that come into play with data curation,” wrote Facebook AI in a blog post. “Self-supervised learning can also help specialize models in domains where we have limited images or metadata, like medical imaging.”
Facebook also open-sourced VISSL, the PyTorch-based general-purpose library for self-supervised learning that was used to develop SEER.