Categories
Misc

Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing

As businesses extend the power of AI and data science to every developer, IT needs to deliver seamless, scalable access to supercomputing with cloud-like simplicity and security. At GTC21, we introduced the latest NVIDIA DGX SuperPOD, which gives business, IT and their users a platform for securing and scaling AI across the enterprise, with the Read article >

The post Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing appeared first on The Official NVIDIA Blog.

Categories
Misc

XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk

Applying for a home mortgage can resemble a part-time job. But whether consumers are seeking out a home loan, car loan or credit card, there’s an incredible amount of work going on behind the scenes in a bank’s decision — especially if it has to say no. To comply with an alphabet soup of financial Read article >

The post XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk appeared first on The Official NVIDIA Blog.

Categories
Misc

NVIDIA, BMW Blend Reality, Virtual Worlds to Demonstrate Factory of the Future

The factories of the future will have a soul — a “digital twin” that blends man and machine in stunning new ways. In a demo blending reality, virtual reality, robotics and AI to manage one of BMW’s automotive factories, NVIDIA CEO Jensen Huang on Monday rolled out a stunning vision of the future of manufacturing. Read article >

The post NVIDIA, BMW Blend Reality, Virtual Worlds to Demonstrate Factory of the Future appeared first on The Official NVIDIA Blog.

Categories
Misc

GTC Showcases New Era of Design and Collaboration

Breakthroughs in 3D model visualization, such as real-time ray-traced rendering and immersive virtual reality, are making architecture and design workflows faster, better and safer. At GTC this week, NVIDIA announced the newest advances for the AEC industry with the latest NVIDIA Ampere architecture-based enterprise desktop RTX GPUs, along with an expanded range of mobile laptop GPUs. AEC professionals will also want to learn more about NVIDIA Omniverse Enterprise, an open platform Read article >

The post GTC Showcases New Era of Design and Collaboration appeared first on The Official NVIDIA Blog.

Categories
Misc

NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries

NVIDIA technology has been behind some of the world’s most stunning virtual reality experiences. Each new generation of GPUs has raised the bar for VR environments, producing interactive experiences with photorealistic details to bring new levels of productivity, collaboration and fun. And with each GTC, we’ve introduced new technologies and software development kits that help Read article >

The post NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries appeared first on The Official NVIDIA Blog.

Categories
Misc

NVIDIA cuQuantum SDK Introduces Quantum Circuit Simulation Acceleration

Developers can use cuQuantum to speed up quantum circuit simulations based on state vector, density matrix, and tensor network methods by orders of magnitude.

Quantum computing has the potential to offer giant leaps in computational capabilities. Until it becomes a reality, scientists, developers, and researchers are simulating quantum circuits on classical computers. 

NVIDIA cuQuantum is an SDK of optimized libraries and tools for accelerating quantum computing workflows. Developers can use cuQuantum to speed up quantum circuit simulations based on state vector, density matrix, and tensor network methods by orders of magnitude. 

The research community – including academia, laboratories, and private industry – is using simulators to help design and verify algorithms that will run on quantum computers. These simulators capture the properties of superposition and entanglement and are built on quantum circuit simulation frameworks such as Qiskit, Cirq, ProjectQ, and Q#.
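As a minimal illustration of what a state-vector simulator does (and of the method that cuQuantum accelerates on GPUs), here is a toy NumPy simulator that prepares a two-qubit Bell state. This is a sketch for intuition only, not the cuQuantum API:

```python
import numpy as np

# Toy state-vector simulation: the state of n qubits is a length-2**n
# complex vector; gates are small unitaries applied to chosen qubits.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # controlled-NOT

def apply_gate(state, gate, qubits, n):
    """Apply a k-qubit gate to the given qubits of an n-qubit state vector."""
    k = len(qubits)
    state = state.reshape([2] * n)
    gate = gate.reshape([2] * (2 * k))
    # Contract the gate's input indices with the target qubit axes.
    state = np.tensordot(gate, state, axes=(list(range(k, 2 * k)), qubits))
    # tensordot moves the contracted axes to the front; restore qubit order.
    state = np.moveaxis(state, list(range(k)), qubits)
    return state.reshape(-1)

# Bell circuit: H on qubit 0, then CNOT with control 0 and target 1.
n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00>
state = apply_gate(state, H, [0], n)
state = apply_gate(state, CNOT, [0, 1], n)
# state is now (|00> + |11>) / sqrt(2)
```

The memory cost of this approach doubles with every added qubit, which is why GPU acceleration (and, eventually, tensor-network methods) matters at scale.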

We showcase accelerated quantum circuit simulation results based on industry estimates, extrapolations, and benchmarks on real-world systems such as ORNL’s Summit and NVIDIA’s Selene, along with reference collaborations with numerous industry partners.

“Using the Cotengra/Quimb packages, NVIDIA’s new cuQuantum SDK, and the Selene supercomputer, we’ve generated a sample of the Sycamore quantum circuit at depth=20 in record time (less than 10 minutes). This sets the benchmark for quantum circuit simulation performance and will help advance the field of quantum computing by improving our ability to verify the behavior of quantum circuits.”

Johnnie Gray, Research Scientist, Caltech
Garnet Chan, Bren Professor of Chemistry, Caltech
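Tensor-network simulators such as Quimb, mentioned in the quote above, contract a network of gate tensors rather than materializing the full 2^n state vector. A toy illustration with np.einsum, computing a single amplitude of the same two-qubit Bell circuit (illustration only, not the cuTensorNet API):

```python
import numpy as np

# Compute the amplitude <11| CNOT (H x I) |00> by contracting gate
# tensors directly, without ever building the full state vector.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
# CNOT reshaped as a rank-4 tensor: [out0, out1, in0, in1]
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)

zero = np.array([1.0, 0.0])  # |0>
one = np.array([0.0, 1.0])   # |1>

# <1|_a <1|_b CNOT[a,b,c,d] H[c,e] I[d,f] |0>_e |0>_f
amp = np.einsum('a,b,abcd,ce,df,e,f->', one, one, CNOT, H, I, zero, zero)
# amp is the |11> amplitude, 1/sqrt(2)
```

Finding a low-cost order in which to perform these contractions is the hard part for large circuits; that search is what packages like Cotengra automate.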

Learn more about cuQuantum, see our latest benchmark results, and apply for early access today.

Categories
Misc

Help me find a loss function.

submitted by /u/FunnyForWrongReason

Categories
Misc

How to replace the TensorFlow 1.14.1 code snippet feature_columns = [tf.contrib.layers.real_value_column("", dimension=98)] in TensorFlow 2.4.1?

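One likely replacement: tf.contrib was removed in TensorFlow 2.x, and the closest equivalent of a real-valued feature column is tf.feature_column.numeric_column, with the dimension argument becoming shape. A minimal sketch (the column name "features" is illustrative; the original used an empty string):

```python
import tensorflow as tf  # TensorFlow 2.x

# TF 1.x: feature_columns = [tf.contrib.layers.real_value_column("", dimension=98)]
# TF 2.x: tf.contrib is gone; use tf.feature_column.numeric_column,
# where dimension=98 becomes shape=(98,).
feature_columns = [tf.feature_column.numeric_column("features", shape=(98,))]
```

The resulting column can be fed to tf.keras.layers.DenseFeatures or a canned estimator the same way the contrib column was used.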

submitted by /u/Myprok

Categories
Misc

New NVIDIA OptiX Enhancements That Improve Your Ray Tracing Applications

OptiX 7.3 brings temporal denoising, improvements to the OptiX curve primitives, and new features in the OptiX Demand Loading library.

The NVIDIA OptiX Ray Tracing Engine is a scalable and seamless framework that offers optimal ray tracing performance on GPUs. In this spring’s update to the OptiX SDK, developers can leverage temporal denoising, faster curve intersectors, and a fully asynchronous demand loading library.

Smoother Denoising for Moving Sequences

The OptiX denoiser gains a brand new mode, temporal denoising, engineered to denoise multi-frame animation sequences without the low-frequency artifacts that appear when animated frames are denoised separately. The results are impressively smooth, and this update will be a boon to users who want to remove noise from moving sequences; it has been one of our most requested features, and now it’s here. This release of the OptiX denoiser brings yet another performance increase as well: the recent AOV (layered) denoising and the new temporal denoising are fast enough on the current generation of NVIDIA GPUs to be used in real time for interactive applications, with plenty of room to spare for rendering.

Left: denoising each frame separately. Right: temporal denoising.

Improved Curves For Better Ray Tracing Performance 

OptiX 7.3 comes with a round of updates to the curve primitive intersectors. The new cubic and quadratic curve intersectors are 20% to 80% faster in this release, and even the already very fast linear intersector (up to 2x faster than cubic) has improved slightly as well. All the intersectors now support backface culling by default, which makes it easier for developers to support shadows, transparency, and other lighting effects that depend on reflected and transmitted secondary rays from hair and curves. The best-kept secret about OptiX curves is how fast they are with OptiX motion blur on the new generation of Ampere GPUs: with Ampere’s hardware acceleration of motion blur, motion-blurred hair renders up to 3x faster than on Turing cards.

Image courtesy Koke Nunez. Rendered in Otoy Octane.

Faster Object Loading Without Extra GPU Resources

The demand loading library, included with the OptiX 7.3 download, has also received updates. It is now fully asynchronous: sparse texture tiles are loaded in the background by multiple CPU threads while OptiX kernels execute on the GPU. Support has also been added for multiple streams, which hides texture I/O latency and eases the implementation of bucketed rendering approaches. This increased parallelism, together with the other performance updates in the OptiX 7.3 SDK, should offer a compelling reward for adding demand loading to your projects. A new sample has been added, and the existing associated samples have been updated, to give you a great place to start.

Image courtesy Daniel Bates. Rendered in Chaos V-Ray.

Categories
Misc

NVIDIA Omniverse Kaolin App Now Available for 3D Deep Learning Researchers

3D deep learning researchers can enter NVIDIA Omniverse and simplify their workflows with the Omniverse Kaolin app, now available in open beta.


The Omniverse platform provides researchers, developers, and engineers with the ability to virtually collaborate and work between different software applications. Omniverse Kaolin is an interactive application that acts as a companion to the NVIDIA Kaolin library, helping 3D deep learning researchers accelerate their process.

The Kaolin app leverages the Omniverse platform, the USD format, and RTX rendering to provide interactive tools for visualizing the 3D outputs of a deep learning model as it trains, inspecting 3D datasets to find inconsistencies and build intuition, and rendering large synthetic datasets from collections of 3D data.

Omniverse Kaolin enables users to reduce the time needed to develop AI research for a wide range of 3D applications.

Training Visualizer

The Omniverse Kaolin Training Visualizer extension allows interactive visualization of 3D checkpoints exported using the Kaolin library’s Python API. By scrubbing through iterations, researchers can see the progression of training over time and visualize the multiple textures and labels that may be predicted for each 3D model.

The 3D checkpoints can include meshes, point clouds, and voxel grids in any number of categories, with multiple textures and labels supported for meshes. The extension also supports creating and saving custom layouts for visualizing results consistently across experiments.

Dataset Visualizer

The performance of machine learning models can depend heavily on the properties of the training data. The Omniverse Kaolin Dataset Visualizer extension allows sampling and visualizing batches from 3D datasets to gain intuition and identify problems that can hinder learning.

Data Generator

Many machine learning techniques rely on images and ground truth labels for training, and synthetic data is a powerful tool to support such applications. The Omniverse Kaolin Data Generator extension uses NVIDIA RTX ray and path tracing to render massive image datasets from a collection of 3D data, while also exporting custom ground truth labels from a variety of sensors.

Download NVIDIA Omniverse and install Omniverse Kaolin today.