Hey everyone, I hope you are having a fabulous day!
I am trying to train a ResNet50 model on a dataset represented by a tensorflow.python.keras.utils.data_utils.Sequence that I created. When I run the code, I keep getting errors saying that my dataset/generator is returning Nones instead of images and I would really like some help figuring out why and how to fix it.
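For reference, here is a minimal sketch of the kind of Sequence I am trying to write (the class name, paths, labels, and the _load_image helper below are illustrative placeholders, not my actual code). As far as I understand, __getitem__ has to explicitly return the batch, and the loader must never hand back None for a bad file:

```python
# Minimal sketch of a Sequence that explicitly returns (images, labels) batches.
# All names, shapes, and the loader are illustrative placeholders.
import numpy as np
from tensorflow.keras.utils import Sequence

class ImageSequence(Sequence):
    def __init__(self, image_paths, labels, batch_size=32):
        self.image_paths = image_paths
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def __getitem__(self, idx):
        start = idx * self.batch_size
        end = start + self.batch_size
        batch_x = np.stack([self._load_image(p) for p in self.image_paths[start:end]])
        batch_y = np.asarray(self.labels[start:end])
        # If this return statement is missing, model.fit() receives None
        # instead of a batch, which matches the error I am seeing.
        return batch_x, batch_y

    def _load_image(self, path):
        # Placeholder loader: replace with real decoding and preprocessing,
        # and make sure it never silently returns None for unreadable files.
        return np.zeros((224, 224, 3), dtype=np.float32)
```

The two spots I am checking are that return statement in __getitem__ and the image loader's behaviour on corrupt or missing files.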
I posted the issue on Stack Overflow, which seems unusually quiet.
Could you guys please help me out?
Thanks a bunch for your time and efforts in advance!
We are pleased to announce that Ray Tracing Gems II, the follow-up to 2019’s Ray Tracing Gems, will be available for digital download and print on August 4th, 2021.
Today, as nearly every hardware vendor and 3D software platform embraces ray tracing, it is clear that real-time ray tracing is here to stay. Ray Tracing Gems II brings the community of rendering experts back together again to unearth true “gems” for developers of games, architectural applications, visualizations, and more in this exciting new era of real-time rendering. Rendering experts share their knowledge by explaining everything from basic ray tracing concepts geared toward beginners all the way to full ray tracing deployment in shipping AAA games.
Just like the first book, the digital edition of Ray Tracing Gems II will be free to download and the print edition will be available for purchase from Apress and Amazon.
Sharing and using knowledge widely and freely is an important pillar of the Ray Tracing Gems series. We are pleased to announce that Ray Tracing Gems II will be “Open Access”, under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND), “which permits use, duplication, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link is provided to the Creative Commons license, and any changes made are indicated”.
In the coming months, leading up to the book’s release in August, we’ll be sharing a preview of some of the incredible content coming in Ray Tracing Gems II. In the meantime, you can get the original Ray Tracing Gems (in hardcover from Apress or as a free digital download) here.
Posted by Anthony Francis, Software Engineer and Alexander Toshev, Staff Research Scientist, Google Research
Computer vision has significantly advanced over the past decade thanks to large-scale benchmarks, such as ImageNet for image classification or COCO for object detection, which provide vast datasets and criteria for evaluating models. However, these traditional benchmarks evaluate passive tasks in which the emphasis is on perception alone, whereas more recent computer vision research has tackled active tasks, which require both perception and action (often called “embodied AI”).
The First Embodied AI Workshop, co-organized by Google at CVPR 2020, hosted several benchmark challenges for active tasks, including the Stanford- and Google-organized Sim2Real Challenge with iGibson, which provided a real-world setup to test navigation policies trained in photo-realistic simulation environments. An open-source setup in the challenge enabled the community to train policies in simulation, which could then be run in repeatable real-world navigation experiments, enabling the evaluation of the “sim-to-real gap” — the difference between simulation and the real world. Many research teams submitted solutions during the pandemic, which were run safely by challenge organizers on real robots, with winners presenting their results virtually at the workshop.
Interactive Navigation: In a cluttered environment, an agent navigating to a goal must physically interact with objects to succeed. For example, an agent should recognize that a shoe can be pushed aside, but that an end table should not be moved and a sofa cannot be moved.
Social Navigation: In a crowded environment in which people are also moving about, an agent navigating to a goal must move politely around the people present with as little disruption as possible.
New Features of the iGibson 2021 Dataset
To facilitate research into techniques that address these problems, the iGibson Challenge 2021 dataset provides simulated interactive scenes for training. The dataset includes eight fully interactive scenes derived from real-world apartments, and another seven scenes held back for testing and evaluation.
iGibson provides eight fully interactive scenes derived from real-world apartments.
To enable interactive navigation, these scenes are populated with small objects drawn from the Google Scanned Objects Dataset, a dataset of common household objects scanned in 3D for use in robot simulation and computer vision research, licensed under a Creative Commons license to give researchers the freedom to use them in their research.
Participating in the Challenge
The iGibson Challenge has launched and its leaderboard is open in the Dev phase, in which participants are encouraged to submit robotic control policies to the development leaderboard, where they will be tested on the Interactive and Social Navigation challenges on our holdout dataset. The Test phase opens for teams to submit final solutions on May 16th and closes on May 31st, with the winners’ demos scheduled for June 20th, 2021. For more details on participating, please check out the iGibson Challenge Page.
Acknowledgements
We’d like to thank our colleagues at the Stanford Vision and Learning Lab (SVL) for working with us to advance the state of interactive and social robot navigation, including Chengshu Li, Claudia Pérez D’Arpino, Fei Xia, Jaewoo Jang, Roberto Martin-Martin and Silvio Savarese. At Google, we would like to thank Aleksandra Faust, Anelia Angelova, Carolina Parada, Edward Lee, Jie Tan, Krista Reyman and the rest of our collaborators on mobile robotics. We would also like to thank our co-organizers of the Embodied AI Workshop, including AI2, Facebook, Georgia Tech, Intel, MIT, SFU, Stanford, UC Berkeley, and University of Washington.
To better connect venture capitalists with NVIDIA and promising AI startups, we’ve introduced the NVIDIA Inception VC Alliance. This initiative, which VCs can apply to now, aims to fast-track the growth of thousands of AI startups around the globe by serving as a critical nexus between the two communities. AI adoption is growing across industries.
NVIDIA today announced that Durham University’s new COSMA-8 supercomputer — to be used by world-leading cosmologists in the UK to research the origins of the universe — will be accelerated by NVIDIA® HDR InfiniBand networking.
Cloud-native supercomputing is the next big thing in supercomputing, and it’s here today, ready to tackle the toughest HPC and AI workloads. The University of Cambridge is building a cloud-native supercomputer in the UK. Two teams of researchers in the U.S. are separately developing key software elements for cloud-native supercomputing.
Scientific discovery powered by supercomputing has the potential to transform the world with research that benefits science, industry and society. A new open, cloud-native supercomputer at Cambridge University offers unrivaled performance that will enable researchers to pursue exploration like never before. The new system is called the Cambridge Service for Data Driven Discovery, or CSD3 for short.
AI Super Resolution Tech Available Later This Year
Unity made real-time ray tracing available to all of its developers in 2019 with the release of Unity 2019 LTS. Before the end of 2021, NVIDIA DLSS (Deep Learning Super Sampling) will be natively supported for HDRP in Unity 2021.2. NVIDIA DLSS uses advanced AI rendering to produce image quality that’s comparable to native resolution, and sometimes even better, while only conventionally rendering a fraction of the pixels. With real-time ray tracing and NVIDIA DLSS, Unity developers will be able to create beautiful real-time ray-traced worlds running at high frame rates and resolutions on NVIDIA RTX GPUs. DLSS also provides a substantial performance boost for traditional rasterized graphics.
While ray tracing produces far more realistic images than rasterization, it also requires a lot more computation, which leads to lower frame rates. NVIDIA’s solution is to ray trace fewer pixels and use AI on our dedicated Tensor Core units to intelligently scale up to a higher resolution, and while doing so, significantly boost frame rates. We built a supercomputer to train the DLSS deep neural net with extremely high-quality 16K offline-rendered images of many kinds of content. Once trained, the model can be integrated into the core DLSS library, included in the game itself, or even downloaded through NVIDIA’s Game Ready driver.
At runtime, DLSS takes three inputs: 1) a low-resolution, aliased image, 2) motion vectors for the current frame, and 3) the high-resolution previous frame. From those inputs, DLSS composes a beautifully sharp high-resolution image, to which post-processing and UI/HUD elements are then applied. You get the performance headroom you need to maximize ray tracing settings and increase output resolution.
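To make that data flow concrete, here is a deliberately simplified, non-neural sketch in NumPy (the shapes and blend weight are assumptions for illustration) of how those three inputs can be combined by classic temporal upscaling: upscale the current low-resolution frame, reproject the previous high-resolution frame along the motion vectors, and blend. DLSS replaces this fixed blend with a trained network running on Tensor Cores, so treat this as an illustration of the inputs rather than NVIDIA’s implementation.

```python
# Toy stand-in for the DLSS data flow: NOT NVIDIA's algorithm, just a fixed
# blend of a spatially upscaled current frame with a motion-reprojected history.
import numpy as np

def toy_temporal_upscale(low_res, motion_vectors, prev_high_res, history_weight=0.8):
    """low_res:        (h, w, 3) current aliased low-resolution frame
       motion_vectors: (H, W, 2) per-pixel (x, y) motion in high-res pixels
       prev_high_res:  (H, W, 3) previous high-resolution output frame
       Returns an (H, W, 3) frame."""
    H, W, _ = prev_high_res.shape
    h, w, _ = low_res.shape

    # 1) Naive nearest-neighbour upscale of the current low-res frame.
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    upscaled = low_res[ys[:, None], xs[None, :]]

    # 2) Fetch history: reproject the previous frame along the motion vectors.
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.rint(yy - motion_vectors[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xx - motion_vectors[..., 0]).astype(int), 0, W - 1)
    reprojected = prev_high_res[src_y, src_x]

    # 3) Fixed blend of history and the upscaled current frame; DLSS learns
    #    this combination instead of using a constant weight.
    return history_weight * reprojected + (1.0 - history_weight) * upscaled
```

A fixed blend like this ghosts on disocclusions and loses fine detail, which is exactly the gap the trained network is meant to close.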
At GTC 2021, Light Brick Studio demonstrated how stunning Unity games can look when real-time ray tracing and DLSS are combined. Watch their full talk for free here.
We are excited to share over a dozen new and updated developer tools released today at GTC for game developers, including NVIDIA Reflex, RTXDI, and our new RTX Technology Showcase.
SDK Updates For Game Developers and Digital Artists
GTC is a great opportunity to get hands-on with NVIDIA’s latest graphics technologies. Developers can apply now for access to RTX Direct Illumination (RTXDI), the latest advancement in real-time ray tracing. Nsight Perf, the next in a line of developer optimization tools, has just been made available to all members of the NVIDIA Developer Program. In addition, several exciting updates to aid game development and professional visualization were announced for existing SDKs.
REAL-TIME RAY TRACING MADE EASIER
RTX Direct Illumination (RTXDI)
Imagine adding millions of dynamic lights to your game environments without worrying about performance or resource constraints. RTXDI makes this possible while rendering in real time.
Geometry of any shape can now emit light and cast appropriate shadows: Tiny LEDs. Times Square billboards. Even exploding fireballs. RTXDI easily incorporates lighting from user-generated models. And all of these lights can move freely and dynamically.
In this scene, you can see neon signs, brake lights, apartment windows, store displays, and wet roads all acting as independent light sources. All of that can now be captured in real time with RTXDI.
RTXDI removes the limit on the number of lights artists can put in a scene. Artists no longer have to cheat, or make painful decisions about which lights matter and which ones don’t. They can light scenes completely unconstrained by anything but their creative vision. Developers can apply for access to RTXDI here.
RTX Global Illumination (RTXGI)
Leveraging the power of ray tracing, the RTX Global Illumination (RTXGI) SDK provides scalable solutions to compute multi-bounce indirect lighting without bake times, light leaks, or expensive per-frame costs. Version 1.1.30 allows developers to enable, disable, and rotate individual DDGI volumes. The RTXGI plugin comes pre-installed on the latest version of NvRTX, which can be found here. Developers can apply for general access to RTXGI here.
NVIDIA Real Time Denoiser (NRD)
NRD is a spatio-temporal, API-agnostic denoising library that’s designed to work with low ray-per-pixel signals. In version 2.0, a high-frequency denoiser (called ReLAX) has been added to support RTXDI signals. Split-screen view support is included for denoised image comparisons, dynamic flow control is accessible, and checkerboard support for the ReLAX and shadow denoisers has been included. Developers can apply for access here.
NVIDIA RTX Unreal Engine Branch (NvRTX)
NvRTX is a custom UE4 branch for NVIDIA technologies on GitHub. Having custom UE4 branches on GitHub shortens the development cycle and helps make games look more stunning. NvRTX 4.26.1 includes RTX Direct Illumination with the ReLAX denoiser in preview, as well as RTX Global Illumination. This branch is the only place to get all of NVIDIA’s RTX technology together. NvRTX also includes an application for developers to experience and play with the latest RTX technology, which will continue to be updated in the future. Try it for yourself here.
IMPROVING FRAME RATES AND RESPONSIVENESS INSTANTLY
Deep Learning Super Sampling (DLSS)
NVIDIA DLSS is a new and improved deep learning neural network that boosts frame rates and generates beautiful, sharp images for your games. It gives you the performance headroom to maximize ray tracing settings and increase output resolution. DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores. It is now available as a plugin for Unreal Engine 4.26; the latest version can be found at NVIDIA Developer or Unreal Marketplace.
Unity has announced that DLSS will be natively supported in Unity Engine version 2021.2 later this year. Learn more here.
Reflex
Reflex SDK allows developers to implement a low-latency mode that aligns game engine work to complete just in time for rendering, eliminating the GPU render queue and reducing CPU back pressure. Reflex 1.4 introduces a new boost feature that further reduces latency when a game becomes CPU render-thread bound. In addition, the flash indicator was added to the Unity Plugin, making it easier to begin measuring latency.
Nsight Perf SDK
Nsight Perf is a graphics profiling toolbox for DirectX, Vulkan, and OpenGL, enabling you to collect GPU performance metrics directly from your application. Profile while you’re in-application, upgrade your CI/CD, and be one with the GPU.
Nsight Graphics
Nsight Graphics is a standalone developer tool that enables you to debug, profile, and export frames built with DirectX 12, Vulkan, OpenGL, and OpenVR. In version 2021.2, we’re introducing Trace Analysis, a powerful new GPU Trace feature that gives developers detailed information on where in the frame to focus in order to improve your application’s performance. In addition to Trace Analysis, GPU Trace can now show sample values in addition to percentages. We’ve also improved window docking to provide more ways for you to configure windows (especially in multi-monitor setups). For captures, you can now specify which swap chain you want to use, making Nsight Graphics easier to use on applications that have multiple windows/swap chains (such as level editors).
All of the powerful debugging and profiling features in Nsight Graphics are available for real-time ray tracing, including support for DXR and Vulkan Ray Tracing. Watch this short video to see how you can leverage Nsight Graphics to improve your developer productivity and ensure that your game is fast and visually breathtaking.
Nsight Systems
Nsight Systems is a system-wide performance analysis tool designed to visualize an application’s algorithms, help you identify the largest opportunities to optimize, and tune to scale efficiently across any quantity or size of CPUs and GPUs, from large servers to our smallest SoC. Version 2021.2 adds CUDA UVM CPU and GPU page faults, Reflex SDK trace, and GPU Metrics Sampling, providing a system-wide overview of efficiency for your GPU workloads. This expands Nsight Systems’ ability to profile system-wide activity and to track GPU workloads and their CPU origins, giving a deeper understanding of GPU utilization across multiple processes and contexts and covering the interop of graphics and compute workloads, including CUDA, OptiX, DirectX, and Vulkan ray tracing and rasterization APIs. Download the latest version here.
CREATING AND SIMULATING PHOTO-REALISTIC GRAPHICS
OptiX
OptiX is an application framework for achieving optimal ray tracing performance on the GPU. It provides a simple, recursive, and flexible pipeline for accelerating ray tracing algorithms. OptiX 7.3 enables object loading from disk, freeing up the GPU and making developers less reliant on the CPU. This update also brings improvements to denoising capabilities for objects in motion while improving the real-time performance of curves. Download OptiX today here.
Images courtesy: Zhelong Xu, Adobe Substance, LeeGriggs, Autodesk Arnold, Mondlicht Studios, Chaos V-Ray, Madis Epler, Otoy Octane, Oly Stingel, Redshift, Siemens Digital Industries Software
NanoVDB
NanoVDB adds real-time rendering GPU support for OpenVDB. OpenVDB is the Academy Award-winning industry-standard data structure and toolset used for manipulating volumetric effects. The latest version of NanoVDB offers a significant reduction in GPU memory footprint, freeing up resources for other tasks.
Texture Tools Exporter
Version 2021.1.1 of the NVIDIA Texture Tools Exporter brings AI-powered NGX Image Super-Resolution, initial support for the KTX and KTX2 file formats including Zstandard supercompression, resizing and high-DPI windowing, and more. You can get access to the latest version here.
Omniverse Audio2Face
NVIDIA Omniverse Audio2Face is now available in open beta. With the Audio2Face app, Omniverse users can generate AI-driven facial animation from audio sources. The beta release of Audio2Face includes the highly anticipated ‘character transfer’ feature, enabling users to retarget animation onto a custom 3D facial mesh.
Reallusion Character Creator Connector
With Character Creator 3 and Omniverse, individuals or design teams can create and deploy digital characters as task performers, virtual hosts, or citizens for simulations and visualizations.
The Connector adds the power of a full character generation system with motions and unlimited creative variations to Omniverse:
Character Creator digital humans can be chosen from a library, or custom creation can begin with highly morphable, fully rigged bases, giving creators of all skill levels an easy way to design characters from scratch.
Character Creator Headshot, SkinGen, and Smart Hair all allow for detailed character definition from head to toe.
Omniverse users can transfer characters and motions from Character Creator with the Omniverse Exporter, an easy-to-learn way to deploy digital humans for Omniverse Create and Omniverse Machinima.
This new Connector adds a complete digital human creation pipeline to any Omniverse-based application.
This GTC, we unveiled a series of new Omniverse Connectors – plugins to third-party applications – including Autodesk 3ds Max, GRAPHISOFT Archicad, Autodesk Maya, Adobe Photoshop, Autodesk Revit, McNeel & Associates Rhino including Grasshopper, Trimble SketchUp, Substance Designer, Substance Painter, and Epic Games Unreal Engine 4.
Alongside this, we have a boatload of new connectors in the works, and some of the ones that will be coming soon are Blender, Reallusion Character Creator 3, SideFX Houdini, Marvelous Designer, Autodesk Motionbuilder, Paraview, OnShape, DS SOLIDWORKS, Substance Source, and many, many more.