Categories: Misc

You Put a Spell on Me: GFN Thursdays Are Rewarding, 15 New Games Added This Week

This GFN Thursday — when GeForce NOW members can learn what new games and updates are streaming from the cloud — we’re adding 15 games to the service, with new content, including NVIDIA RTX and DLSS in a number of games. Plus, we have a GeForce NOW Reward for Spellbreak from our friends at Proletariat.


Categories: Misc

How can I save an NVIDIA StyleGAN2-ADA .pkl file to .pb for use in other applications?

Title says it all. I’ve trained an sg2-ada GAN and it is saved as a pickled checkpoint, but I don’t know how to get it out of that format.

For reference, the github with all the goods: https://github.com/dvschultz/stylegan2-ada/blob/main/train.py
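
In case it is useful to others: with the TF1-based stylegan2-ada code, one approach is to unpickle the networks (with the repo's dnnlib/tflib on the Python path) and freeze the generator graph into a .pb. The following is only a rough sketch under those assumptions, not a tested recipe; the checkpoint and output filenames are placeholders, and input_templates/output_templates are the tensor lists exposed by tflib's Network class.

    # Rough sketch: freeze the Gs generator from a StyleGAN2-ADA (TF1) .pkl into a .pb.
    # Assumes the stylegan2-ada repo (dnnlib, tflib) is importable and TF 1.14/1.15 is installed.
    import pickle
    import tensorflow as tf
    import dnnlib.tflib as tflib

    tflib.init_tf()  # create the default TF session before unpickling the networks
    with open("network-snapshot-000000.pkl", "rb") as f:  # placeholder filename
        _G, _D, Gs = pickle.load(f)  # Gs is the averaged generator (a tflib Network)

    sess = tf.get_default_session()
    output_names = [t.op.name for t in Gs.output_templates]
    # Bake variables into constants so the exported graph is self-contained.
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_names)

    with tf.io.gfile.GFile("stylegan2_ada.pb", "wb") as f:  # placeholder output path
        f.write(frozen_graph.SerializeToString())
    print("inputs:", [t.op.name for t in Gs.input_templates])
    print("outputs:", output_names)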


Categories: Misc

TensorFlow Returning "ValueError: ‘outputs’ must be defined before the loop"

Hey everyone, I hope you are having a fabulous day!

I am trying to train a ResNet50 model on a dataset represented by a tensorflow.python.keras.utils.data_utils.Sequence that I created. When I run the code, I keep getting errors saying that my dataset/generator is returning Nones instead of images, and I would really like some help figuring out why and how to fix it.
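
For what it's worth, one common cause of that error is a Sequence (or generator) that yields nothing usable, for example a __getitem__ with a code path that returns None, or a __len__ of 0. A minimal Sequence that always returns (images, labels) arrays is sketched below for comparison; the class and variable names (ImageSequence, image_paths, labels) are illustrative, not from the original code.

    # Minimal sketch of a keras Sequence that always returns (inputs, targets) arrays.
    import numpy as np
    import tensorflow as tf

    class ImageSequence(tf.keras.utils.Sequence):
        def __init__(self, image_paths, labels, batch_size=32, target_size=(224, 224)):
            self.image_paths = list(image_paths)
            self.labels = np.asarray(labels)
            self.batch_size = batch_size
            self.target_size = target_size

        def __len__(self):
            # Batches per epoch; must be > 0 or Keras has nothing to iterate over.
            return int(np.ceil(len(self.image_paths) / self.batch_size))

        def __getitem__(self, idx):
            lo = idx * self.batch_size
            hi = lo + self.batch_size
            batch = [
                tf.keras.preprocessing.image.img_to_array(
                    tf.keras.preprocessing.image.load_img(p, target_size=self.target_size)
                ) / 255.0
                for p in self.image_paths[lo:hi]
            ]
            # Every code path must return an (inputs, targets) tuple of arrays;
            # a bare return (i.e. None) for some index is what feeds Nones into model.fit().
            return np.stack(batch), self.labels[lo:hi]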

I posted the issue on Stack Overflow, which seems unusually quiet.

Could you guys please help me out?

Thanks a bunch for your time and efforts in advance!


Categories: Misc

Ray Tracing Gems II: Available August 4th

We are pleased to announce that Ray Tracing Gems II, the follow-up to 2019’s Ray Tracing Gems, will be available for digital download and print on August 4th, 2021.


Today, as nearly every hardware vendor and 3D software platform embraces ray tracing, it is clear that real-time ray tracing is here to stay. Ray Tracing Gems II brings the community of rendering experts back together again to unearth true “gems” for developers of games, architectural applications, visualizations, and more in this exciting new era of real-time rendering. Rendering experts share their knowledge by explaining everything from basic ray tracing concepts geared toward beginners all the way to full ray tracing deployment in shipping AAA games.

Just like the first book, the digital edition of Ray Tracing Gems II will be free to download and the print edition will be available for purchase from Apress and Amazon. 

Sharing and using knowledge widely and freely is an important pillar of the Ray Tracing Gems series. We are pleased to announce that Ray Tracing Gems II will be “Open Access”, under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND), “which permits use, duplication, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link is provided to the Creative Commons license, and any changes made are indicated”.

In the coming months, leading up to the book’s release in August, we’ll be sharing a preview of some of the incredible content coming in Ray Tracing Gems II. In the meantime, you can get the original Ray Tracing Gems (in hardcover from Apress or as a free digital download) here.

Adam Marrs, Peter Shirley, and Ingo Wald

Categories: Offsites

Presenting the iGibson Challenge on Interactive and Social Navigation

Computer vision has significantly advanced over the past decade thanks to large-scale benchmarks, such as ImageNet for image classification or COCO for object detection, which provide vast datasets and criteria for evaluating models. However, these traditional benchmarks evaluate passive tasks in which the emphasis is on perception alone, whereas more recent computer vision research has tackled active tasks, which require both perception and action (often called “embodied AI”).

The First Embodied AI Workshop, co-organized by Google at CVPR 2020, hosted several benchmark challenges for active tasks, including the Stanford- and Google-organized Sim2Real Challenge with iGibson, which provided a real-world setup to test navigation policies trained in photo-realistic simulation environments. An open-source setup in the challenge enabled the community to train policies in simulation, which could then be run in repeatable real-world navigation experiments, enabling the evaluation of the “sim-to-real gap” — the difference between simulation and the real world. Many research teams submitted solutions during the pandemic, which were run safely by challenge organizers on real robots, with winners presenting their results virtually at the workshop.

This year, Stanford and Google are proud to announce a new version of the iGibson Challenge on Interactive and Social Navigation, one of the 10 active visual challenges affiliated with the Second Embodied AI Workshop at CVPR 2021. This year’s Embodied AI Workshop is co-organized by Google and nine other research organizations, and explores issues such as simulation, sim-to-real transfer, visual navigation, semantic mapping and change detection, object rearrangement and restoration, auditory navigation, and following instructions for navigation and interaction tasks. In addition, this year’s interactive and social iGibson challenge explores interactive navigation and social navigation — how robots can learn to interact with people and objects in their environments — by combining the iGibson simulator, the Google Scanned Objects Dataset, and simulated pedestrians within realistic human environments.

New Challenges in Navigation
Active perception tasks are challenging, as they require both perception and actions in response. For example, point navigation involves navigating through mapped space, such as driving robots over kilometers in human-friendly buildings, while recognizing and avoiding obstacles. Similarly, object navigation involves looking for objects in buildings, requiring domain-invariant representations and object search behaviors. Additionally, visual language instruction navigation involves navigating through buildings based on visual images and commands in natural language. These problems become even harder in a real-world environment, where robots must be able to handle a variety of physical and social interactions that are much more dynamic and challenging to solve. In this year’s iGibson Challenge, we focus on two of those settings:

  • Interactive Navigation: In a cluttered environment, an agent navigating to a goal must physically interact with objects to succeed. For example, an agent should recognize that a shoe can be pushed aside, but that an end table should not be moved and a sofa cannot be moved.
  • Social Navigation: In a crowded environment in which people are also moving about, an agent navigating to a goal must move politely around the people present with as little disruption as possible.

New Features of the iGibson 2021 Dataset
To facilitate research into techniques that address these problems, the iGibson Challenge 2021 dataset provides simulated interactive scenes for training. The dataset includes eight fully interactive scenes derived from real-world apartments, and another seven scenes held back for testing and evaluation.

iGibson provides eight fully interactive scenes derived from real-world apartments.

To enable interactive navigation, these scenes are populated with small objects drawn from the Google Scanned Objects Dataset, a dataset of common household objects scanned in 3D for use in robot simulation and computer vision research, licensed under a Creative Commons license to give researchers the freedom to use them in their research.

The Google Scanned Objects Dataset contains 3D models of many common objects.

The challenge is implemented in Stanford’s open-source iGibson simulation platform, a fast, interactive, photorealistic robotic simulator with physics based on Bullet. For this year’s challenge, iGibson has been expanded with fully interactive environments and pedestrian behaviors based on the ORCA crowd simulation algorithm.

iGibson environments include ORCA crowd simulations and movable objects.
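
For a rough sense of how such an environment is driven programmatically, a gym-style rollout looks something like the sketch below. This is only an illustration: the exact module path, config file name, and constructor arguments vary across iGibson releases and are assumptions here, not taken from the challenge materials.

    # Sketch of a random-policy rollout in an iGibson scene (module path, config file,
    # and keyword arguments are assumptions; check the installed iGibson release).
    from igibson.envs.igibson_env import iGibsonEnv

    env = iGibsonEnv(config_file="turtlebot_interactive_nav.yaml", mode="headless")
    for episode in range(3):
        state = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()  # replace with a trained policy
            state, reward, done, info = env.step(action)  # observations, reward, termination
    env.close()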

Participating in the Challenge
The iGibson Challenge has launched and its leaderboard is open in the Dev phase, in which participants are encouraged to submit robotic control policies to the development leaderboard, where they will be tested on the Interactive and Social Navigation challenges on our holdout dataset. The Test phase opens for teams to submit final solutions on May 16th and closes on May 31st, with the winners’ demo scheduled for June 20th, 2021. For more details on participating, please check out the iGibson Challenge Page.

Acknowledgements
We’d like to thank our colleagues at the Stanford Vision and Learning Lab (SVL) for working with us to advance the state of interactive and social robot navigation, including Chengshu Li, Claudia Pérez D’Arpino, Fei Xia, Jaewoo Jang, Roberto Martin-Martin and Silvio Savarese. At Google, we would like to thank Aleksandra Faust, Anelia Angelova, Carolina Parada, Edward Lee, Jie Tan, Krista Reyman and the rest of our collaborators on mobile robotics. We would also like to thank our co-organizers of the Embodied AI Workshop, including AI2, Facebook, Georgia Tech, Intel, MIT, SFU, Stanford, UC Berkeley, and University of Washington.

Categories: Misc

Accelerated Portfolios: NVIDIA Inception VC Alliance Connects Top Investors with Leading AI Startups

To better connect venture capitalists with NVIDIA and promising AI startups, we’ve introduced the NVIDIA Inception VC Alliance. This initiative, which VCs can apply to now, aims to fast-track the growth of thousands of AI startups around the globe by serving as a critical nexus between the two communities. AI adoption is growing across industries…


Categories: Misc

Durham University and DiRAC’s New NVIDIA InfiniBand-Powered Supercomputer to Accelerate Our Understanding of the Universe

NVIDIA today announced that Durham University’s new COSMA-8 supercomputer — to be used by world-leading cosmologists in the UK to research the origins of the universe — will be accelerated by NVIDIA® HDR InfiniBand networking.

Categories: Misc

Cloud-Native Supercomputing Is Here: So, What’s a Cloud-Native Supercomputer?

Cloud-native supercomputing is the next big thing in supercomputing, and it’s here today, ready to tackle the toughest HPC and AI workloads. The University of Cambridge is building a cloud-native supercomputer in the UK. Two teams of researchers in the U.S. are separately developing key software elements for cloud-native supercomputing. The Los Alamos National Laboratory…


Categories: Misc

NVIDIA Accelerates World’s First TOP500 Academic Cloud-Native Supercomputer to Advance Research at Cambridge University

Scientific discovery powered by supercomputing has the potential to transform the world with research that benefits science, industry and society. A new open, cloud-native supercomputer at Cambridge University offers unrivaled performance that will enable researchers to pursue exploration like never before. The Cambridge Service for Data Driven Discovery, or CSD3 for short, is a UK…


Categories: Misc

NVIDIA DLSS Natively Supported in Unity 2021.2


AI Super Resolution Tech Available Later This Year

Unity made real-time ray tracing available to all of their developers in 2019 with the release of 2019 LTS. Before the end of 2021, NVIDIA DLSS (Deep Learning Super Sampling) will be natively supported for HDRP in Unity 2021.2. NVIDIA DLSS uses advanced AI rendering to produce image quality that’s comparable to native resolution, and sometimes even better, while conventionally rendering only a fraction of the pixels. With real-time ray tracing and NVIDIA DLSS, Unity developers will be able to create beautiful real-time ray-traced worlds running at high frame rates and resolutions on NVIDIA RTX GPUs. DLSS also provides a substantial performance boost for traditional rasterized graphics.

While ray tracing produces far more realistic images than rasterization, it also requires a lot more computation, which leads to lower frame rates. NVIDIA’s solution is to ray trace fewer pixels and use AI on our dedicated Tensor Core units to intelligently scale up to a higher resolution and, in doing so, significantly boost frame rates. We built a supercomputer to train the DLSS deep neural net with extremely high-quality 16K offline-rendered images of many kinds of content. Once trained, the model can be integrated into the core DLSS library, into the game itself, or even downloaded via NVIDIA’s Game Ready driver.

At runtime, DLSS takes three inputs: 1) a low-resolution, aliased image, 2) motion vectors for the current frame, and 3) the high-resolution previous frame. From those inputs, DLSS composes a beautifully sharp high-resolution image, to which post-processing and the UI/HUD are then applied. You get the performance headroom you need to maximize ray tracing settings and increase output resolution.

At GTC 2021, Light Brick Studio demonstrated how stunning Unity games can look when real-time ray tracing and DLSS are combined. Watch their full talk for free here.

Keep an eye out for more news about DLSS in Unity 2021.2 by subscribing to NVIDIA game development news and following the Unity Technologies Blog.