Accelerate your AI-based simulations using NVIDIA Modulus. The 22.07 release brings advancements with weather modeling, novel network architectures, geometry modeling, performance, and more.
Visual effects savant Surfaced Studio steps into the NVIDIA Studio this week to share his clever film sequences, Fluid Simulation and Destruction, as well as his creative workflows. These sequences feature the quirky visual effects that Surfaced Studio is renowned for demonstrating on his YouTube channel.
Learn how the PennyLane lightning.gpu device uses the NVIDIA cuQuantum software development kit to speed up the simulation of quantum circuits.
The release by U.S. President Joe Biden on Monday of the first full-color image from the James Webb Space Telescope is already astounding — and delighting — humans around the globe. “We can see possibilities nobody has ever seen before, we can go places nobody has ever gone before,” Biden said during a White House press event.
In this new course, learn how to create software-defined, cloud-native, DPU-accelerated services with zero-trust protection that meet the growing performance and security demands of modern data centers.
Ken Jee, a data science professional, shares insights on leveraging university resources, benefits of content creation, and useful learning methods for AI topics.
Ken Jee is a data scientist and YouTube content creator who has quickly become known for creating engaging and easy-to-follow videos. Jee has helped countless people learn about data science, machine learning, and AI and is the initiator of the popular #66daysofdata movement.
Currently, Jee works as the Head of Data Science at Scouts Consulting Group. In this post, he discusses his work as a data scientist and offers advice for anyone looking to enter the field. We explore the importance of university education, the relevancy of math for data scientists, creating visibility within the industry, and the value of an open mind when it comes to new technologies.
This post is a transcription of bits and pieces of Jee’s wisdom from our conversation on my podcast. At the conclusion of this article, you’ll find a link to the entire discussion. While Jee’s answers have been edited for brevity and conciseness, the intent of his answers is maintained.
Why did you start making data science videos on YouTube?
I started making data science videos on YouTube because I didn’t see the resources that I was looking for when I was trying to learn data science.
I also saw making videos as the best way to improve my communication skills. Creating content has given me a competitive advantage because it has attracted employers to me rather than going out to get them. I usually refer to this as the concept of content gravity. The more content that I create, the more pull I have on employers and opportunities coming to me.
I love working on interesting data projects and creating easy-to-digest content that can help others learn and grow. I believe that data science skills are valuable and shareable and that data-driven content has a great potential to go viral. Companies should encourage their employees to have side hustles and be public about them, as it looks good for the company.
I see a future where everyone uses social media to share their work and ideas and where this is accepted and expected in most roles. In some of my previous job roles, I’ve been referred to as “the guy who makes YouTube videos.” My external efforts outside of work have aided my internal visibility within companies.
How did you become interested in data science?
I became interested in data science because I wanted to improve my golfing skills. I started to explore how data could help me analyze my performance and find ways to improve. I soon discovered that I had a unique advantage: the ability to analyze data and create data-driven actions to improve my golfing abilities. This led me to explore further performance improvement methods supported by data and intelligence.
How essential is mathematics in data science?
I believe that mathematics is less important when breaking into the data science field. What’s important is getting your hands dirty and coding. I recommend that people get their hands dirty by building projects and coding, as this will help them intuitively find where the math is valuable and important.
I also recommend reviewing calculus, linear algebra, and discrete math, but only once you have a reason to do so and understand how they are relevant to data science. As you continue to progress within the field, you will gradually learn where math skills are important and relevant. And once you see the value that they bring, you will be more motivated to learn them.
Is self-directed learning more important than a formal degree when entering the data science field?
One of the primary reasons I encourage people to investigate unconventional learning methods, as opposed to attending a university, is that many students underutilize the resources available at institutions. I used all of my office hours with professors and asked questions of PhDs who knew a lot about their subjects, but I discovered very few students did the same.
In my opinion, having a degree is only useful if you put in the effort and make the most of the available opportunities. I recommend taking advantage of other options available at university, such as side projects. Doing so can help students get the most out of their education and give them an edge in the job market. However, I warn that simply getting a degree does not guarantee a successful career.
Editor’s Note: Jee contributes to the data science learning platform 365DataScience, educating learners on starting a successful data science career. He also has a master’s degree in computer science and another in business, marketing, and management. Jee holds a bachelor’s degree in economics.
Obtaining a master’s degree in an advanced subject such as data science is not always the best way to stand out. An impressive portfolio, unique work, or volunteer experience can be more valuable.
It is worth considering whether you can invest the time and money into obtaining a master’s degree, as it is undoubtedly a viable resource. But it’s also important to weigh the opportunity cost of returning to school to land a job. You essentially must determine whether attending grad school will provide a good return on investment.
How do you learn?
I learn best by struggling through something on my own at my own pace, rereading the same thing over and over again until I understand it. In grad school, I fell in love with reading, and the majority of my knowledge came from textbooks.
I recommend looking at things from different angles to get a diverse understanding of a topic. One of the most important keys to accelerating learning is finding a suitable medium that explains the topic in a way that makes sense to you: this could be reading a blog post, watching a video, or listening to a podcast.
Although my primary method of obtaining knowledge in grad school was through books, I admit that my learning of data science concepts and topics today involves videos and YouTube tutorials. Specifically, I want to mention the popular data science YouTube channel StatQuest with Josh Starmer.
What are the best skills to differentiate yourself as a data scientist?
Data scientists have to learn coding, math, and business in order to be successful. I differentiated myself from the competition with my unique combination of skills. My business knowledge and ability to meet the strategic requirement for coding and data science made me a highly desirable candidate. My resume and portfolio stood out from the competition. Additionally, my communication skills and business knowledge gave me a distinct edge in job interviews.
How did you become the head of data science at your current company?
I discovered very early on that I didn’t fit well into corporate bureaucracy. My focus was on creating value, getting noticed for adding value, and finding work satisfaction. My title has progressed from data scientist to Head of Data Science, and I am now responsible for all data-related work, effectively taking on the role of Director of Data Science.
This change reflects the increased responsibilities that I have taken on within my current company, from solely being responsible for all data science activities to managing teams of data scientists. If you are looking for a job, I recommend that you create your opportunities by reaching out to potential employers.
You may be surprised at how open they are to hiring you if they see that you are willing and able to do the work. I advise data science practitioners to find a position that doesn’t yet exist or make one for themselves. This way, you can skip the line and get to where you want to be without waiting for opportunities.
What is your advice to entry-level data scientists?
Entry-level data scientists should share their work and journey with others. People are hesitant to produce content because they are afraid of being judged, but this is not usually the case. People are more likely to be positive and supportive. I also recommend learning to code first, as this is a valuable skill for data scientists. However, I recognize that everyone learns differently, so this is not a one-size-fits-all approach.
Summary from the author
Jee’s journey within data science is unique, but the steps that led to his success are replicable and adaptable to your data science career. My discussion with him revealed the importance of using digital content to communicate your expertise and presence within the data science field, which can sometimes be filled with noise. His advice to data science practitioners is to focus on creating value and making sure that you’re learning continuously to keep up with the rapidly changing field. So whatever your goals are for your data science career, don’t forget to enjoy the journey and document it along the way!
You can watch or listen to the entire conversation with Ken Jee on YouTube or Spotify.
Learn how NVIDIA and Azure together enable global on-demand access to the latest GPUs and developer solutions to build, deploy, and scale AI-powered services.
Engineers are using the NVIDIA Omniverse 3D simulation platform as part of a proof of concept that promises to become a model for putting green energy to work around the world. Dubbed Gigastack, the pilot project — led by a consortium that includes Phillips 66 and Denmark-based renewable energy company Ørsted — will create low-emission Read article >
Announcing our first Omniverse developer contest for building an Omniverse Extension. Show us how you’re extending Omniverse to transform 3D workflows and virtual worldbuilding.
Developers across industries are building 3D tools and applications to help teams create virtual worlds in art, design, manufacturing, and more. NVIDIA Omniverse, an extensible platform for full-fidelity design, simulation, and developing USD-based workflows, has an ever-growing ecosystem of developers building Python-based extensions. We’ve launched contests in the past for building breathtaking 3D simulations using the Omniverse Create app.
Today, we’re announcing our first NVIDIA Omniverse contest specifically for developers, engineers, technical artists, hobbyists, and researchers to develop Python tools for 3D worlds. The contest runs from July 11 to August 19, 2022. The overall winner will be awarded an NVIDIA RTX A6000, and the runners-up in each category will win a GeForce RTX 3090 Ti.
The challenge? Build an Omniverse Extension using Omniverse Kit and the developer-centric Omniverse application Omniverse Code. Contestants can create Python extensions in one of the following categories for the Extend the Omniverse contest:
Layout and scene authoring tools
Omni.ui with Omniverse Kit
Scene modifier and manipulator tools
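Whichever category you enter, every extension starts from the same entry point: a class that Kit instantiates when the extension is enabled and tears down when it is disabled. The sketch below shows that minimal skeleton. The class name is hypothetical, and the `try`/`except` fallback exists only so the file can be imported outside of Omniverse; inside Kit, `omni.ext` is always available.

```python
# Minimal Omniverse extension skeleton (sketch).
# Inside Omniverse, omni.ext is available; the stub below only lets
# the file be imported and inspected outside of Kit.
try:
    import omni.ext
    _Base = omni.ext.IExt
except ImportError:  # running outside Omniverse
    class _Base:
        pass


class MyToolExtension(_Base):  # hypothetical extension name
    """Entry point Kit calls when the extension is enabled or disabled."""

    def on_startup(self, ext_id):
        # Build windows, register menus, and subscribe to stage events here.
        self.ext_id = ext_id
        print(f"[{ext_id}] extension started")

    def on_shutdown(self):
        # Tear down UI and unsubscribe from events here.
        print("extension shut down")
```

In a real extension, Kit discovers this class through the extension’s `extension.toml` manifest, and `on_startup` is where windows, menus, and event subscriptions are typically created.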
Layout and scene authoring tools
The demand for 3D content and environments is growing exponentially. Layout and scene authoring tools help scale workflows for world-building, leveraging rules-based algorithms and AI to generate assets procedurally.
Instead of tediously placing every component by hand, creators can paint in broader strokes and automatically generate physical objects like books, lamps, or fences to populate a scene. With the ability to iterate layout and scenes more freely, creators can accelerate their workflows and free up time to focus on creativity.
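A rules-based scatter can be surprisingly little code. The sketch below is a generic, Omniverse-independent illustration of the idea: deterministic structure (a fixed spacing along a line, as for fence posts) plus controlled variation (seeded random jitter, so results are reproducible). All names and parameter values are illustrative.

```python
import random


def scatter_on_grid(count, spacing, jitter, seed=0):
    """Return (x, y) placements along a line with bounded random jitter.

    A toy stand-in for rules-based layout: the grid supplies structure,
    the seeded jitter supplies natural-looking variation.
    """
    rng = random.Random(seed)
    placements = []
    for i in range(count):
        x = i * spacing + rng.uniform(-jitter, jitter)
        y = rng.uniform(-jitter, jitter)
        placements.append((x, y))
    return placements


# 10 fence posts, 2.0 units apart, up to 0.1 units of jitter
posts = scatter_on_grid(count=10, spacing=2.0, jitter=0.1, seed=42)
```

In an actual Omniverse extension, each placement would become a USD prim authored on the stage; the same seeded-randomness pattern keeps the generated scene stable across re-runs.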
Universal Scene Description (USD) is at the foundation of the layout and scene authoring tools contestants can develop in Omniverse. The powerful, easily extensible scene description handles incredibly large 3D datasets without skipping a beat, enabling creation, editing, querying, rendering, and collaboration in 3D worlds.
Video 1. How to build a tool using Omniverse Code that programmatically creates a scene
Omni.ui with Omniverse Kit
Well-crafted user interfaces provide a superior experience for artists and developers alike. They can boost productivity and enable nontechnical and technical users to harness the power of complex algorithms.
Building custom user interfaces has never been simpler than with Omni.ui, Omniverse’s UI toolkit for creating beautiful, flexible graphical user interfaces. Omni.ui was designed using modern asynchronous technologies and UI design patterns to be reactive and responsive.
Using Omniverse Kit, you can deeply customize the final look of applications with widgets for creating visual components, receiving user input, and creating data models. With its style sheet architecture that feels akin to HTML or CSS, you can change the look of your widgets or create a new color scheme for an entire app.
Existing widgets can be combined and new ones can be defined to build the interface that you’ve always wanted. These extensions can range from floating panels in the navigation bar to markup tools in Omniverse View and Showroom. You can also create data models, views, and delegates to build robust and flexible interfaces.
Video 2. How to use Omniverse Kit and Omni.ui, the toolkit to create custom UIs in Python
Scene modifier and manipulator tools
Scene modifier and manipulator tools offer new ways for artists to interact with their scenes. Whether it’s changing the geometry of an object, the lighting of a scene, or creating animations, these tools enable artists to modify and manipulate scenes with limited manual work.
Using omni.ui.scene, Omniverse’s low-code module for building UIs in 3D space, you can develop 3D widgets and manipulators to create and move shapes in a 3D projected scene with Python. Many primitive objects are available, including text, image, rectangle, arc, line, curve, and mesh, with more regularly being added.
Video 3. How to build a scene modifier tool in Omniverse
We can’t wait to see what extensions you’ll create to contribute to the ecosystem of extensions that are expanding what’s possible in the Omniverse. Read more about the contest, or watch the video below for a step-by-step guide on how to enter. You can also visit the GitHub contest page for sample code and other resources to get started.
Video 4. How to submit to the contest
Don’t miss these upcoming events:
Join the Omniverse community on Discord on July 13, 2022, for the Getting Started – #ExtendOmniverse Developer Contest livestream.
Join us at SIGGRAPH for hands-on developer labs where you can learn how to build extensions in Omniverse.
Learn more in the Omniverse Resource Center, which details how developers can build custom applications and extensions for the platform.
A breakthrough in the simulation and learning of contact-rich interactions provides tools and methods to accelerate robotic assembly and simulation research.
NVIDIA robotics and simulation researchers presented Factory: Fast Contact for Robotic Assembly at the 2022 Robotics: Science and Systems (RSS) conference. This work is a novel breakthrough in the simulation and learning of contact-rich interactions, which are ubiquitous in robotics research. Its aim is to greatly accelerate research and development in robotic assembly, as well as serve as a powerful tool for contact-rich simulation of any kind.
Robotic assembly: What, why, and challenges
Assembly is essential across the automotive, aerospace, electronics, and medical industries. Examples include tightening nuts and bolts, soldering, peg insertion, and cable routing.
However, robotic assembly remains one of the oldest and most challenging tasks in robotics. It has been exceptionally difficult to automate because of its physical complexity, part variability, and high reliability and accuracy requirements.
In industry, robotic assembly methods may achieve high precision, accuracy, and reliability but often require expensive equipment and custom fixtures that can be time-consuming to set up and maintain (preprogrammed trajectories and careful tuning, for example). Tasks that involve robustness to variation (part types, appearance, and locations) and complex manipulation are frequently done using manual labor.
Research methods may achieve lower cost, higher adaptivity, and improved robustness but are often less reliable and slower.
Simulation: A tool for solving the challenges in robotic assembly
Simulation has been used for decades to verify, validate, and optimize robot designs and algorithms in robotics. This includes ensuring the safety of deploying these algorithms. It has also been used to generate large-scale datasets for deep learning, perform system identification, and develop planning and control methods.
In reinforcement learning (RL) research, we have recently seen how simulation results can be transferred to a real system. The importance of accurate physics simulation for robotics development cannot be overemphasized.
Figure 1. ANYmal Demo Training in NVIDIA Isaac Gym, a high-performance GPU-accelerated physics simulator for robot learning
Physics-based simulators like MuJoCo and NVIDIA Isaac Gym have been used to train virtual agents to perform manipulation and locomotion tasks, such as solving a Rubik’s Cube or walking on uneven terrain using ANYmal. The policies have successfully transferred to real-world robots.
However, the power of a fast and accurate simulator has not substantially impacted robotic assembly. Developing such simulators for complex bodies with different variations and motions is a difficult task.
For example, a simple nut-and-bolt assembly requires more than pure helical motion. There are finite clearances between the threads of the nut and bolt, which allow the nut to move with six degrees of freedom. Even humans require some level of carefulness to ensure that the nut has proper initial alignment with the bolt and does not get stuck during tightening.
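The ideal helical constraint is simple to state: for a thread of pitch p, one full turn advances the nut axially by p. The sketch below computes that nominal travel for an M16 coarse thread (2.0 mm pitch per ISO metric standards); the finite thread clearances described above are exactly what let a real nut deviate from this curve in the other five degrees of freedom.

```python
import math


def nominal_nut_travel(pitch_mm, angle_rad):
    """Axial advance of a nut for a given rotation, under the ideal
    helical constraint (no clearance, no wobble)."""
    return pitch_mm * angle_rad / (2.0 * math.pi)


# M16 coarse thread: 2.0 mm pitch
pitch = 2.0
one_turn = nominal_nut_travel(pitch, 2.0 * math.pi)      # 2.0 mm
quarter_turn = nominal_nut_travel(pitch, math.pi / 2.0)  # 0.5 mm
```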
However, simulating the task with traditional methods may require meshes with tens of thousands of triangles. Detecting collisions between these meshes, generating contact points and normals, and solving non-penetration constraints are major computational challenges.
Despite the abundance of threaded fasteners in the world, no existing robotics simulator has been able to simulate even a single nut-and-bolt assembly in real time, that is, at the same rate as the underlying physical dynamics.
In Factory, the researchers developed methods to overcome the challenges in robotic assembly and other contact-rich interactions.
What is Factory?
Factory (Fast Contact for Robotic Assembly) is a set of physics simulation methods and robot learning tools for simulating a wide range of contact-rich interactions at real-time or faster rates. One of Factory’s applications is robotic assembly.
Factory offers the following central contributions:
A set of methods for fast, accurate physical simulation of contact-rich interactions through a novel GPU-based synthesis of signed distance function (SDF)-based collisions, contact reduction, and a Gauss-Seidel solver.
A robot learning suite consisting of:
60 high-quality assets, including a Franka robot and all rigid-body assemblies from the NIST Assembly Task Board 1, the established benchmark for robotic assembly
Three Isaac Gym-style learning environments for robotic assembly
Seven classical robot controllers
Proof-of-concept reinforcement learning policies for robots performing contact-rich tasks (a simulated Franka Robot solving the most contact-rich task on the NIST board, nut-and-bolt assembly)
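To make the first contribution concrete: a signed distance function (SDF) reports how far a query point is from a surface, with negative values inside. The sketch below shows the idea for the simplest possible case, a sphere. Factory’s GPU implementation evaluates SDFs of far more complex geometry (such as meshed threads), but the contact test has the same shape: a point whose signed distance drops below zero (or a small margin) is in contact.

```python
import math


def sphere_sdf(point, center, radius):
    """Signed distance from `point` to a sphere: negative inside,
    zero on the surface, positive outside."""
    dx, dy, dz = (point[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius


def in_contact(point, center, radius, margin=0.0):
    """A point is in contact when its signed distance is at or
    below the margin (penetrating or nearly touching)."""
    return sphere_sdf(point, center, radius) <= margin


origin = (0.0, 0.0, 0.0)
inside = sphere_sdf((0.0, 0.0, 0.0), origin, 1.0)   # -1.0 (penetrating)
surface = sphere_sdf((1.0, 0.0, 0.0), origin, 1.0)  # 0.0 (touching)
outside = sphere_sdf((2.0, 0.0, 0.0), origin, 1.0)  # 1.0 (separated)
```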
The physics simulation methods in the Factory paper have been integrated into the PhysX physics engine used by Isaac Gym. The asset suite and reinforcement learning policies are available with the latest version of Isaac Gym and the Isaac Gym Environments GitHub repo. The simulation methods are also available in the Omniverse Isaac Sim simulator, with reinforcement learning examples coming later this summer.
Simulation methods and results
Using fast GPU-based implementations of SDF collisions for objects, contact reduction algorithms for pruning the contacts those collisions generate, and custom numerical solvers, the researchers were able to simulate not just a single M16 nut and bolt in real time, but 1,024 of them in parallel environments, still in real time. This is roughly 20,000x faster than the prior state of the art.
The researchers demonstrated the simulator’s performance in a wide range of challenging scenes, including the following:
512 bowls falling into a pile in the same environment
A pile of nuts fed into a feeder mechanism vibrating at 60 Hz
A Franka robot executing a hand-scripted trajectory to grasp and tighten a nut onto a bolt, with 128 instances of this environment executing in real time
Figure 2. The M16 nut-and-bolt assemblies scene, consisting of 1,024 parallel nut-and-bolt interactions executing in real time
Figure 3. The Franka robot plus M16 nut-and-bolt assemblies scene, consisting of 128 parallel Franka robots retrieving nuts from a vibratory feeder mechanism and tightening them onto a bolt
Figure 4. 1,024 bowls falling into a pile in the same environment, executing in real time
Robot learning tools
The most established benchmark for robotic assembly is the NIST assembly task board, the focus of an annual robotics competition since 2017. The NIST Task Board 1 consists of 38 unique parts. However, the CAD models provided are not ideal for physics simulation: they lack real-world clearances, contain interferences between parts, rely on hand-derived measurements, and so on. Realistic models are hard to find.
Figure 5. The real NIST Task Board 1. Compare to the simulated board in Figure 6.
Factory uses 60 high-quality, simulation-ready part models, each with an Onshape CAD model, one or more OBJ meshes, a URDF description, and estimated material properties that conform to international standards (ISO 724, ISO 965, and ISO 286) or which are based on models sourced from manufacturers. These models include all parts on the NIST assembly Task Board 1 with dimensional variations that span real-world tolerance bands. Clearance between parts ranges from 0 to a maximum of 2.66 mm, with many parts within the 0.1-0.5 mm range.
Figure 6. Rendering of a simulated NIST Task Board 1, demonstrating the provided assets
Factory provides three robotic assembly scenes for Isaac Gym that can be used for developing planning and control algorithms, collecting simulated sensor data for supervised learning, and training RL agents. Each scene contains a Franka robot and disassembled assemblies from the NIST Task Board 1.
The assets can be randomized in types and locations across all environments. All scenes have been tested with up to 128 simultaneous environments on an NVIDIA RTX 3090 GPU. The scenes are shown below:
Figure 7. Factory robotic assembly environments
The seven robot controllers available in the learning environments include a joint-space inverse differential kinematics (IK) motion controller, a joint-space inverse dynamics (ID) controller, a task-space impedance controller, an operational space motion controller, an open-loop force controller, a closed-loop proportional force controller, and a hybrid force-motion controller.
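Of these, the task-space impedance controller is the easiest to sketch: it renders a virtual spring-damper at the end effector, commanding a force proportional to the position error minus a damping term on velocity. The gains and values below are illustrative only, not parameters from the Factory environments.

```python
def task_space_impedance_force(x, x_des, v, kp, kd):
    """F = Kp * (x_des - x) - Kd * v, elementwise per task-space axis.

    A virtual spring pulls the end effector toward x_des while a
    virtual damper resists its current velocity v.
    """
    return [kp_i * (xd_i - x_i) - kd_i * v_i
            for kp_i, kd_i, x_i, xd_i, v_i in zip(kp, kd, x, x_des, v)]


# Illustrative 3-axis example: 5 cm error in z, small velocity in x
f = task_space_impedance_force(
    x=[0.0, 0.0, 0.10], x_des=[0.0, 0.0, 0.15],
    v=[0.02, 0.0, 0.0],
    kp=[200.0, 200.0, 200.0], kd=[20.0, 20.0, 20.0])
# f is approximately [-0.4, 0.0, 10.0]: damping opposes the x motion,
# the spring pushes the end effector up toward the z target
```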
The researchers intend for the models, environments, and controllers to continue growing through contributions from both the team and the community.
Proof-of-concept RL policies
Factory employs GPU-accelerated on-policy RL to solve the most contact-rich task on NIST Task Board 1: assembling a nut onto a bolt. Like many assembly tasks, such a procedure is long-horizon and challenging to learn end-to-end. The problem was separated into three phases:
Pick: The robot grasps the nut with a parallel-jaw gripper from a random location on a work surface.
Place: The robot transports the nut to the top of a bolt fixed to the surface.
Screw: The robot brings the nut into contact with the bolt, engages the mating threads, and tightens the nut until it contacts the base of the bolt head.
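The decomposition above can be expressed as a simple sequential executor: each subpolicy runs until it reports success, then hands the state to the next. The sketch below is a generic illustration with stubbed subpolicies standing in for the trained pick, place, and screw policies; it is not the Factory training code.

```python
def run_phases(phases, state):
    """Run subpolicies in order; each returns the updated state and
    whether its goal was reached. Stop early on failure."""
    for name, policy in phases:
        state, success = policy(state)
        if not success:
            return state, f"failed during {name}"
    return state, "assembled"


# Stub subpolicies standing in for trained pick/place/screw policies
pick = lambda s: ({**s, "grasped": True}, True)
place = lambda s: ({**s, "on_bolt": True}, True)
screw = lambda s: ({**s, "tightened": True}, True)

final_state, status = run_phases(
    [("pick", pick), ("place", place), ("screw", screw)],
    state={"grasped": False, "on_bolt": False, "tightened": False})
# status == "assembled"
```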
Figure 8. A trained robot arm picking up a nut, one of the achieved goal states of the trained subpolicies for FrankaNutBoltEnv
Figure 9. A trained robot arm placing a nut onto a bolt, one of the achieved goal states of the trained subpolicies for FrankaNutBoltEnv
Figure 10. A trained robot arm screwing a nut onto a bolt, one of the achieved goal states of the trained subpolicies for FrankaNutBoltEnv
The training was done on a single GPU. Large randomizations were applied to the initial positions and orientations of the objects, with a batch of 3-4 policies trained simultaneously using proximal policy optimization (PPO). Each batch takes 1-1.5 hours to train, and each subpolicy is trained across 128 environments with a maximum of 1,024 policy updates for rapid experimentation. The success rate was 98.4% at test time.
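At the core of PPO is the clipped surrogate objective: the probability ratio between the new and old policies is clipped to [1 − ε, 1 + ε] so a single update cannot move the policy too far. The sketch below computes that objective for a handful of hypothetical samples; the ratios and advantages are made up for illustration.

```python
def ppo_clipped_objective(ratios, advantages, eps=0.2):
    """Mean of min(r * A, clip(r, 1 - eps, 1 + eps) * A) over samples,
    the quantity PPO maximizes at each policy update."""
    terms = []
    for r, a in zip(ratios, advantages):
        clipped_r = max(1.0 - eps, min(1.0 + eps, r))
        terms.append(min(r * a, clipped_r * a))
    return sum(terms) / len(terms)


# Hypothetical samples: the second ratio (1.5) gets clipped to 1.2,
# capping the reward the update can claim for that sample
obj = ppo_clipped_objective(ratios=[1.0, 1.5, 0.9],
                            advantages=[2.0, 1.0, -1.0])
```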
Finally, to evaluate the potential for sim-to-real transfer (transferring the policy learned in simulation to real-world robotics systems), the researchers compared the contact forces generated during these interactions in simulation to contact forces measured in the real world by humans performing the same task with a wrench. For more information, see the R-PAL Daily Interactive Manipulation (DIM) dataset.
The figure below shows that the histogram of simulated Fasten Nut forces lies within the range of the real-world Fasten Nut histogram, demonstrating strong consistency with real-world values.
Figure 11. Comparison of simulated contact forces during screw subpolicy execution with analogous real-world contact forces from the Daily Interactive Manipulation (DIM) dataset
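One simple way to quantify the kind of agreement shown in Figure 11 is histogram intersection: bin both force distributions identically, normalize, and sum the per-bin minima, so that 1.0 means identical binned distributions. The force samples below are synthetic and purely illustrative; the actual comparison used the DIM dataset.

```python
def histogram_intersection(samples_a, samples_b, bins, lo, hi):
    """Normalized histogram intersection of two 1-D sample sets:
    1.0 = identical binned distributions, 0.0 = disjoint."""
    width = (hi - lo) / bins

    def normalized_hist(samples):
        counts = [0] * bins
        for s in samples:
            i = min(int((s - lo) / width), bins - 1)
            counts[i] += 1
        total = len(samples)
        return [c / total for c in counts]

    ha = normalized_hist(samples_a)
    hb = normalized_hist(samples_b)
    return sum(min(a, b) for a, b in zip(ha, hb))


# Synthetic force samples (N), purely illustrative
sim_forces = [4.1, 4.4, 5.0, 5.2, 5.5, 6.0]
real_forces = [3.8, 4.3, 4.9, 5.6, 6.2, 7.1]
overlap = histogram_intersection(sim_forces, real_forces,
                                 bins=4, lo=3.0, hi=8.0)
```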
Conclusion and future directions
Although Factory was developed with robotic assembly as a motivating application, there are no limitations on using the methods for entirely different tasks within robotics, such as grasping complex non-convex shapes in home environments, locomotion on uneven outdoor terrain, and non-prehensile manipulation of aggregates of objects.
The future direction of this work is to realize full end-to-end simulation for complex physical interactions, including techniques for efficiently transferring the trained policies to real-world robotic systems. This can potentially minimize cost and risk, improve safety, and achieve efficient behaviors.
One day, every advanced industrial manufacturing robot might be trained in simulation using such techniques for seamless transfer to the real world.
Toward this end, NVIDIA developers are working to refine the physics simulation methods used in the Factory research so that they can be used within Omniverse Isaac Sim. Limited functionality is already present and will become more robust over time.