Categories
Misc

Examples of some of the most impressive, complex TensorFlow applications you’ve seen?

submitted by /u/Bulbasaur2015

Categories
Misc

Possible to retrain onnx model?

I am trying to use a model built in PyTorch in a project that I am working on, and I will need to retrain it. Would it be possible for me to somehow retrain the model in TF if I have imported an ONNX version of it? It would save a lot of time compared to the alternative of porting the whole thing over from PyTorch manually.
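For reference, a minimal sketch of one possible route, assuming the `onnx` and `onnx-tf` packages are installed; `model.onnx` and `model_tf` are placeholder paths. `onnx-tf` can convert an ONNX graph into a TensorFlow SavedModel, though the converted graph is aimed at inference, so direct retraining is often limited.

```python
# Hedged sketch: convert an ONNX export of a PyTorch model into a TensorFlow
# SavedModel with onnx-tf. "model.onnx" and "model_tf" are placeholder paths.
try:
    import onnx
    from onnx_tf.backend import prepare

    onnx_model = onnx.load("model.onnx")   # the model exported from PyTorch
    tf_rep = prepare(onnx_model)           # build a TensorFlow representation
    tf_rep.export_graph("model_tf")        # write out a TensorFlow SavedModel
    status = "converted"
except Exception:
    status = "skipped"  # onnx/onnx-tf not installed, or model file missing
```

The resulting SavedModel can be loaded with `tf.saved_model.load("model_tf")`, but it is typically not set up for training; for genuine retraining, porting the architecture and loading the converted weights into it is usually the more reliable path.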

submitted by /u/hoagie_friend

Categories
Misc

Trying to conceptualize what a tensor "is"

I am trying to understand what a tensor is and how its properties are useful in machine learning.

I’m looking for feedback on whether I’m on the right track in this journey.

I want to answer why and how a tensor works for classifying, really, anything.

A tensor, by its very nature, classifies by defining a space. The more dimensions you add to that space, the more complex a space you can describe.
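To make "dimensions" concrete: in NumPy (or TensorFlow) terms, a tensor's rank is simply its number of axes, and each added axis lets it describe a richer space. A small sketch:

```python
import numpy as np

# A tensor's rank is its number of axes; each added axis lets it
# describe a more complex space of values.
scalar = np.array(3.0)              # rank 0: a single value
vector = np.array([1.0, 2.0, 3.0])  # rank 1: a point in 3-D space
matrix = np.ones((2, 3))            # rank 2: a 2x3 grid of values
image = np.zeros((64, 64, 3))       # rank 3: e.g. a 64x64 RGB image

print(scalar.ndim, vector.ndim, matrix.ndim, image.ndim)  # 0 1 2 3
```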

Is the act of applying transform rules to a tensor what allows it to describe all the other variations of the concept the tensor is trying to describe? Or is it just transforming the tensor into one of the mirror representations (more on this later)?

A tensor is like a feature. Hotdog is something you can classify using a tensor. The “crispness” of that classification increases as you increase the ranks. The more ranks you add to the tensor the better you can represent what a hotdog is.

With too few ranks, a nose feature is easily confused with a carrot. Maybe a feature described by a tensor of lower rank will find it impossible to gain the resolution required to distinguish between a hotdog and a carrot at all.

Is there such a thing as too many ranks? Or does it just become harder and harder to train? Do more ranks increase the possibility of overfitting? I don’t know – but I’d love someone to reason through it.

The permutations of values across the dimensions the object represents must have an unimaginable number of mirror representations that would also represent a hotdog. That’s why trained models with different values can give the same outcome. Could this be what a transform is doing?

There are even more slightly skewed representations of a hotdog that exist as the values in each dimension are wiggled. But those skews only exist in the visual data: adding a rank of “what is it used for” to the data makes those visual confusions impossible. You would never confuse a hotdog for a flashlight if the value of edible were added to the dataset being trained.

One or more of the tensor’s dimensions would be 99.99999% successful because they would all converge on using that as the best datapoint.
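A toy sketch of the point above, with entirely made-up feature values: two objects that score identically on a single visual feature become trivially separable once a hypothetical "edible" bit is added.

```python
# Hypothetical data: one visual "shape" score, identical for both objects.
visual_only = {"hotdog": [0.8], "flashlight": [0.8]}
# Adding an "edible" feature (1 = edible, 0 = inedible) makes them separable.
with_edible = {"hotdog": [0.8, 1.0], "flashlight": [0.8, 0.0]}

def classify(features):
    """Toy classifier: with the edible bit the rule is exact;
    with visual data alone the two objects are indistinguishable."""
    if len(features) > 1:
        return "hotdog" if features[1] == 1.0 else "flashlight"
    return "ambiguous"

print(classify(with_edible["hotdog"]))      # hotdog
print(classify(with_edible["flashlight"]))  # flashlight
print(classify(visual_only["hotdog"]))      # ambiguous
```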

But visual data doesn’t have such obvious binary data – I mean it’s binary, but visual data can’t take on the distinct property of:

“1” – edible and “0” – inedible.

Instead, the binary nature of the decision exists in a more complex representation between the dimensional values (lol, actually also in all the mirrors) – e.g. you can equally represent it as:

“0” – edible and “1” – inedible.

Training on data is the process of bumping the values at random until they fall into one of these permutations that’s able to answer the question with the desired classification.

Overtraining is when you bump the values until the model recognizes only the data itself as the “space” – the data becomes the binary decision it encodes for, rather than what the data is trying to embody.
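The "bumping values at random" idea can be sketched as random search over a single made-up parameter: a 1-D threshold separating two synthetic classes. Bumps that keep or improve accuracy are accepted, and the parameter usually settles into one of the many settings that answer the question correctly:

```python
import random

# Synthetic 1-D data: (value, label) pairs; class 0 clusters low, class 1 high.
random.seed(0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

def accuracy(threshold):
    # Fraction of points where "value above threshold" matches the label.
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

threshold = random.random()  # start from a random guess
for _ in range(1000):
    candidate = threshold + random.uniform(-0.1, 0.1)  # bump the value at random
    if accuracy(candidate) >= accuracy(threshold):
        threshold = candidate                          # keep bumps that help

print(accuracy(threshold))
```

Any threshold between the two clusters classifies perfectly, which mirrors the "mirror representations" point: many different parameter values give the same outcome.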

submitted by /u/theregalbeagler

Categories
Misc

Data Science – Top Resources from GTC 21

Accelerated data science can dramatically boost the performance of end-to-end analytics workflows, speeding up value generation while reducing cost. Learn how companies like Spotify and Walmart use NVIDIA-accelerated data science.

Data analytics workflows have traditionally been slow and cumbersome, relying on CPU compute for data preparation, training, and deployment.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

On-Demand Sessions

GPU-Accelerated Model Evaluation: How we took our offline evaluation process from hours to minutes with RAPIDS
Speakers: Joseph Cauteruccio, Machine Learning Engineer, Spotify; Marc Romeyn, Machine Learning Engineer, Spotify

Learn how Spotify utilized cuDF and Dask-CUDF to build an interactive model evaluation system that drastically reduced the time it took to evaluate our recommender systems in an offline setting. As a result, model evaluations that previously took hours to complete as CPU workloads now run in minutes, allowing us to increase our overall iteration speed and thus build better models.

Accelerated ETL, Training and Inference of Recommender Systems on the GPU with Merlin, HugeCTR, NVTabular, and Triton
Speaker: Even Oldridge, Senior Manager, Recommender Systems Framework Team, NVIDIA

In this talk, we’ll share the Merlin framework, consisting of NVTabular for ETL, HugeCTR for training, and Triton for inference serving. Merlin accelerates recommender systems on GPU, speeding up common ETL tasks, training of models, and inference serving by ~10x over commonly used methods. Beyond providing better performance, these libraries are also designed to be easy to use and integrate with existing recommendation pipelines.

How Walmart improves computationally intensive business processes with NVIDIA GPU Computing
Speakers: Richard Ulrich, Senior Director, Walmart; John Bowman, Director, Data Science, Walmart

Over the last several years, Walmart has been developing and implementing a wide range of applications that require GPU computing to be computationally feasible at Walmart scale. We will present CPU vs. GPU performance comparisons on a number of real-world problems from different areas of the business, and we highlight not just the performance gains from GPU computing, but also the capabilities GPU computing has enabled that would simply not be possible on CPU-only architectures.

How Cloudera Data Platform uses a single pane of glass to deploy GPU-accelerated applications across hybrid and multi-clouds
Speakers: Karthikeyan Rajendran, Product Manager, NVIDIA; Scott McClellan, General Manager of Data Science, NVIDIA

Learn how Cloudera Data Platform uses a single pane of glass to deploy GPU-accelerated applications across hybrid and multi-clouds.

GPU-Accelerated, High-Performance Machine Learning Pipeline
Speaker: Lei Zhang, Senior Machine Learning Engineer, Adobe

The Adobe team is currently working with NVIDIA to build an unprecedented GPU-based, high-performance machine learning pipeline.

Click here to view all of the other Data Science sessions and demos on NVIDIA On-Demand.

Categories
Misc

Getting Started on Jetson – Top Resources from GTC 21

Hands-on learning is key for anyone new to AI and robotics. Priced for everyone, the Jetson Nano Developer Kit is the best way to get started learning how to create AI projects.

The NVIDIA Jetson Nano Developer Kit is a small AI computer for makers, learners, and developers. Jetson Nano is also the perfect tool to start learning about AI and robotics in real-world settings, with ready-to-try projects and the support of an active and passionate developer community. Begin developing your first AI projects today.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

On-Demand Sessions

Jetson 101: Learning Edge AI Fundamentals

Speaker: Dustin Franklin, Developer Evangelist for Jetson, NVIDIA

Discover how to get started creating your own AI-powered projects on Jetson Nano with deep learning and computer vision. 

Optimizing for Edge AI on Jetson

Speaker: John Welsh, Developer Technology Engineer of Autonomous Machines

Learn about workflows for optimizing deep learning models for inference at the edge with NVIDIA Jetson.

Demos

Getting started with Jetson Nano 2GB Developer Kit

Jetson Community Projects

Explore and learn from Jetson projects created by us and our community, built for Jetson developer kits, with code, videos, and more.

Categories
Misc

Graphics – Top Resources from GTC 21

Engineers, product developers and designers worldwide attended GTC to learn how the latest NVIDIA technologies are accelerating real-time, interactive rendering and simulation workflows.


We showcased the latest NVIDIA-powered AI and real-time ray tracing tools that have made creativity faster and easier for artists and designers. We also spoke with industry luminaries about their vision for the future of AI and real-time ray tracing, and how Autodesk and Adobe have integrated the technology into their most popular applications. All of these GTC sessions are now available through NVIDIA On-Demand.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

On-Demand Sessions

A Shared Vision for the Future of AI: Fireside Chat with NVIDIA founder and CEO Jensen Huang, and Adobe CTO Abhay Parasnis
Learn from the CTO of Adobe, Abhay Parasnis, and NVIDIA founder and CEO Jensen Huang about the powerful impact AI has on the world’s most inspiring creators and where it will take us next.

Digital Human for Digital Twins
Speakers: NVIDIA, wrnch, Reallusion

Watch this talk about the importance of simulating human action and movement in a digital environment, and how advancements in AI are paving the way for digital-twin humans.

The Future of GPU Ray Tracing
Speakers: Adobe/Solid Angle, OTOY, Blur Studio, Epic Games, Pixar, Redshift Rendering Technologies, Isotropix, Chaos Group

This panel discussion features leaders in GPU-accelerated ray tracing sharing their thoughts on the technology and its impact on creative professionals and workflows.

New SDK Releases

NVIDIA OptiX 7.3 Available Now
Download the latest version of OptiX, which offers improved ray tracing performance while reducing GPU resource usage.

Accelerate Special Effects with NanoVDB
NanoVDB adds real-time GPU rendering support for OpenVDB. Download the latest version now, which includes memory savings while generating complex special effects.

All the professional graphics and simulation sessions at GTC are now available for free on NVIDIA On-Demand.

Categories
Misc

Video Processing and Streaming – Top Resources from GTC 21

This year at GTC we announced the release of NVIDIA Maxine, a GPU-accelerated SDK for building innovative virtual collaboration and content creation applications such as video conferencing and live streaming.

AI has been instrumental in providing exciting features and improving quality and operational efficiency for conferencing, media delivery and content creation. 


Check out some of the most popular sessions, demos, and videos from GTC showcasing Maxine’s latest advancements:

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

SDK

NVIDIA Maxine Now Available
With Maxine’s AI SDKs—Video Effects, Audio Effects, and Augmented Reality (AR)—developers can now create real-time, video-based experiences easily deployed to PCs, data centers, and the cloud. Maxine can also leverage NVIDIA Jarvis to access conversational AI capabilities such as transcription, translation, and virtual assistants.

On-Demand

How NVIDIA’s Maxine Changed the Way We Communicate
Hear from Avaya’s Mike Kuch, Sr. Director of Solutions Marketing, and Paul Relf, Sr. Director of Product Management, about Avaya Spaces built on CPaaS. Avaya is making capabilities associated with meetings available in contact centers. With AI noise elimination, agents and customers can hear each other in noisy environments. We’re combining components to realize the art of the possible, creating unique experiences by Avaya with NVIDIA AI.

Real-time AI for Video-Conferencing with Maxine
Learn from Andrew Rabinovich, Co-Founder and CTO, Julian Green, Co-Founder and CEO, and Tarrence van As, Co-Founder and Principal Engineer, from Headroom about applying the latest AI research on real-time video and audio streams for a more-human video-conferencing application. Explore employing generative models for super-resolution, giving order-of-magnitude reduced bandwidth. See new solutions for saliency segmentation delivering contextual virtual backgrounds of stuff that matters. 

Demo

Building AI-Powered Virtual Collaboration and Content Creation Solutions with NVIDIA Maxine
With new state-of-the-art AI features for video, audio, and augmented reality—including AI face codec, eye contact, super resolution, noise removal, and more—NVIDIA Maxine is reinventing virtual collaboration on PCs, in the data center, and in the cloud.

Reinvent Video Conferencing, Content Creation & Streaming with AI Using NVIDIA Maxine
Developers from video conferencing, content creation, and streaming providers such as Notch, Headroom, Be.Live, and Touchcast are using the Maxine SDK to create real-time video-based experiences easily deployed to PCs, data centers, or the cloud.

Categories
Misc

Data Center Networking – Top Resources from GTC 21

NVIDIA is enabling these organizations to easily develop accelerated applications and implement cybersecurity frameworks in order to deliver breakthrough networking, security, and storage performance with a comprehensive, open development platform.

As organizations embrace cloud and edge computing models, they are looking for more efficient, modern computing architectures that create a secure, accelerated, virtual private cloud (SA-VPC), able to support multi-tenancy and deliver applications at data center scale with all the necessary levels of performance and cyber protection.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

 

On-Demand Session

Program Data Center Infrastructure Acceleration with the Release of DOCA and the Latest DPU Software
Speakers: Ariel Kit, Director of Product Marketing for Networking, NVIDIA; Ami Badani, Vice President of Marketing, NVIDIA

DPU experts Ami Badani and Ariel Kit discuss how NVIDIA DOCA is enabling new infrastructure acceleration and management features in BlueField, all while simplifying programming and application integration.

Morpheus: AI Inferencing for Cybersecurity Pipelines
Speaker: Bartley Richardson, NVIDIA

What does NVIDIA Morpheus mean for the future of the data center and cloud security? Take a deep dive into the newly announced AI cybersecurity framework with engineering manager Bartley Richardson by watching this on-demand GTC 21 session.

SDK

NVIDIA DOCA
Develop applications with breakthrough networking, security, and storage performance using NVIDIA DOCA — the newly released complete, open software platform.

NVIDIA Morpheus
Detect cybersecurity threats in an instant with NVIDIA Morpheus, a new AI framework for creating zero-trust data center security.

Click here to view all of the Data Center Networking sessions and demos on NVIDIA On-Demand.

Categories
Misc

Develop Robotics Applications – Top Resources from GTC 21

NVIDIA Isaac is a developer toolbox for accelerating the development and deployment of AI-powered robots. The SDK includes Isaac applications, GEMs (robot capabilities), a Robot Engine, and Isaac Sim.

Isaac SDK is the robotics platform for accelerating the development and deployment of robotics applications. It is a GPU-optimized toolkit for AI and computer vision applications, including perception, navigation, and manipulation features enabled by AI.

Isaac Sim leverages NVIDIA Omniverse to build the next generation of robotics and AI simulators. Start building virtual robotic worlds and experiments, supporting navigation and manipulation applications through the Isaac SDK, with RGB-D, lidar, and inertial measurement unit (IMU) sensors, domain randomization, ground-truth labeling, segmentation, and bounding boxes.

Here are some resources to introduce you to the Isaac platform.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

On-Demand Sessions

Sim-to-Real in Isaac Sim
Speakers: Hai Loc Lu, Lead System Software Engineer, NVIDIA; Michael Gussert, Deep Learning Engineer, NVIDIA

Learn how to train and test robots in virtual environments with Isaac Sim on Omniverse, then transfer to physical Jetson powered robots.

Isaac Gym: End-to-End GPU-Accelerated Reinforcement Learning
Speakers: Gavriel State, Senior Director for Simulation and AI, NVIDIA; Lukasz Wawrzyniak, Senior Engineer, NVIDIA

Isaac Gym is NVIDIA’s environment for high-performance reinforcement learning on GPUs. We will review key API features, demonstrate examples of training agents, and provide updates on future integration of Isaac Gym functionality within the NVIDIA Omniverse platform. We will demonstrate how to create environments with thousands of agents to train in parallel, and how the Isaac Gym system allows developers to create tensor based views of physics state for all environments. We will also demonstrate the application of physics based domain randomization in Isaac Gym, which can help with sim2real transfer of learned policies to physical robots.

Bridging Sim2Real Gap: Simulation Tuning for Training Deep Learning Robotic Perception Models
Speaker: Peter Dykas, Solutions Architect, NVIDIA

Deep neural networks enable accurate perception for robots. Simulation offers a way to train deep learning robotic perception models that were previously not possible in scenarios where it is prohibitively expensive, time-consuming, or infeasible to collect large labeled datasets. We’ll dive into how NVIDIA is bridging the gap between simulation and reality with domain randomization, photorealistic simulation, and accurate physics imitation with Isaac Sim, and more.

Docs

NVIDIA Carter
Carter is a robot developed as a platform to demonstrate the capabilities of the Isaac SDK. It is based on a differential drive and uses lidar and a camera to perceive the world. This document walks you through hardware assembly and software setup for Carter.

Getting Started Tutorials and Sample Applications
Over 30 tutorials and samples are provided with the Isaac SDK to get you started.

Click here to view more Isaac SDK sessions on NVIDIA On-Demand.

Categories
Misc

Automotive – Top Resources from GTC 21

The annual DRIVE Developer Days was held during GTC 2021, featuring a series of specialized sessions on AV development led by NVIDIA experts. Learn about perception, mapping, simulation and more anytime with NVIDIA On-Demand.

The annual DRIVE Developer Days was held during GTC 2021, featuring a series of specialized sessions on autonomous vehicle hardware and software, including perception, mapping, simulation and more, all led by NVIDIA experts. These sessions are now available to view anytime with NVIDIA On-Demand.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA’s technology platform.

 

On-Demand Sessions

DRIVE AGX Hardware Update with NVIDIA Orin

Speaker: Gary Hicok, Senior Vice President, Hardware and Systems, NVIDIA

This session will provide an early look at the next generation of DRIVE AGX hardware platforms based on the upcoming NVIDIA Orin SoC.

 

Turbocharge Autonomous Vehicle Development with DRIVE OS and DriveWorks

Speakers: Stephen Jones, Product Line Manager, and Hope Allen, Product Manager, DriveWorks, NVIDIA

Learn how NVIDIA DRIVE OS and DriveWorks turbocharge autonomous vehicle development, delivering foundational autonomous tools and functional safety while simultaneously optimizing NVIDIA DRIVE AGX compute performance.

 

DRIVE AV Perception Overview

Speaker: Chongyu Wang, Product Manager

The ability to interpret a scene with 360° awareness is a critical function of an autonomous vehicle. In this session, we highlight the NVIDIA DRIVE AV Perception software stack, including an architecture overview and our latest algorithmic results.

 

Mapping and Localization with DRIVE AV

Speaker: Rambo Jacoby, Principal Product Manager, NVIDIA

The use of HD maps is a key part of ensuring a safe and comfortable journey. In this session, we’ll provide an overview of NVIDIA’s end-to-end solution for creating and maintaining crowdsourced HD maps, and how they’re used for vehicle localization.

 

A Modular Approach to AV Planning and Control

Speaker: Alexey Baranov, Senior Product Manager, NVIDIA

Planning and control executes maneuvers using input from perception, prediction, and mapping. In this session, we review the NVIDIA DRIVE AV modular approach to planning and control software and the variety of capabilities it enables.

 

Leveraging EGX and DGX for Developing AV Platforms and Supporting Connected Services

Speaker: Rambo Jacoby, Principal Product Manager, NVIDIA

In this session, we look at how NVIDIA DGX and NVIDIA EGX are used to create the network of data centers and edge devices necessary for developing an AV platform and delivering functionality and connected services to vehicles of the future.

 

Automated Testing at Scale to Enable Deployment of Autonomous Vehicles

Speaker: Justyna Zander, Global Head of Verification and Validation, NVIDIA

In this session, we discuss the use of simulation and computing infrastructure for AV development. We also demonstrate a scalable and automated set of solutions for end-to-end testing to enable AV deployment on the road, according to safety standards.

 

NVIDIA DRIVE Sim and Omniverse

Speaker: Matt Cragun, Senior Product Manager, AV Simulation, NVIDIA

This session covers NVIDIA DRIVE Sim, built on NVIDIA Omniverse, and how its scalable simulation infrastructure supports AV development and end-to-end testing.

 

Click here to view all of the Automotive sessions and demos on NVIDIA On-Demand.