Categories
Misc

Experience Immersive Streaming with Omniverse XR Remote


Content creators and developers can now view their 3D content in full immersive detail with NVIDIA Omniverse XR Remote for iPad. The app is available now from the App Store for iPads running iOS 14.5 or higher.

Visualizing complex 3D models is critical in industries such as architecture and manufacturing, where context is everything. Minor design decisions can trigger changes that lead to higher costs and time-consuming adjustments.

A 3D model of a skyscraper sits on a desk, viewed using AR mode in XR Remote.
Figure 1. View full-fidelity 3D models using AR mode in XR Remote.

Omniverse XR Remote addresses this challenge by enabling users to interact with full-fidelity, real-time NVIDIA RTX ray-traced content in Omniverse, streamed directly from a desktop to an iPad using NVIDIA CloudXR. Content can be viewed in AR, where users bring virtual assets into their world, or through a VR virtual camera that gives users a “window” for navigating a 3D scene or experience.

For developers, this provides a new means to distribute content built in Omniverse, without compromising on quality or mobility.

A highly detailed model of a kitchen is viewed using VR Virtual Camera mode in XR Remote.
Figure 2. Explore model details including full-fidelity textures and real-time NVIDIA RTX ray-traced lighting using VR Virtual Camera mode in XR Remote.

Streaming immersive design

Kohn Pedersen Fox Associates (KPF), one of the world’s preeminent architecture firms, is leveraging NVIDIA technologies to make the design process more intuitive for designers, engineers, and clients. 

“We see a future where we can bring our design to the table and the computer helps us make it real,” said Cobus Bothma, Applied Research Director at KPF.

Bothma is using XR Remote to visualize at-scale architectural models overlaid with complex data sets—like diagrammatic flow lines of wind around a building, creating a virtual wind tunnel. Currently, KPF is working to simulate multiple buildings in the same scene, viewed on a tablet device.

Figure 3. Diagrammatic flow lines of windflow overlaid on a 3D building model using Omniverse and viewed in XR Remote. Image provided by Kohn Pedersen Fox Associates.

“With XR Remote, we can reduce review cycles from days to hours,” said Bothma. 

Historically, it would have taken the KPF design team 4 to 6 weeks to develop a custom app for every design change. “Now we can simply stream it to them and they will immediately have the latest view,” Bothma said.

The application delivers an immersive view of Universal Scene Description content from Omniverse to any supported iOS or Android device using the NVIDIA CloudXR streaming solution. XR Remote is one of the first instances where users can leverage CloudXR streaming to reach back through a remote agent and harness extra compute power. This lets users run a simulation in Omniverse and stream full-fidelity graphics to an iPad in real time.

The result is a fully immersive interaction with 3D content, which enables easier collaboration to speed up design processes. “This is a much more intuitive way to interact with 3D content than a mouse and keyboard,” said Greg Jones, Director of Global Business Development and Product Management for XR at NVIDIA.

“With XR Remote, users can grab the iPad and literally walk through their data. This changes the game for industries like AEC, manufacturing, and M&E, where flat digital tools have required designers to translate 2D renderings into 3D results,” Jones said. 

Getting started with Omniverse XR Remote

The Omniverse XR Remote application is available now through the App Store, with support for Android devices as well.

Requirements for using XR Remote on an iPad:

  • An iPad with iOS 14.5 or higher.
  • Omniverse XR Remote application, installed on an iPad.
  • The latest version of Omniverse Create, installed on an NVIDIA RTX-enabled PC (Windows or Linux) or VM.
  • Both PC and iPad must be connected to the same network.

To get started, load a 3D model and enable AR settings in Omniverse Create on a PC. Then input the corresponding IP address into Omniverse XR Remote on your iPad. Check out the XR Remote documentation for detailed instructions.
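If you are unsure which IP address to enter, one quick way to find the host PC's address is to ask the operating system which interface it would use for outbound traffic. The sketch below is a generic Python snippet, not part of any Omniverse tooling; the 8.8.8.8 target is only a routing hint, and no packets are actually sent:

```python
import socket

def local_ip() -> str:
    """Return the host's primary local IP address.

    Opening a UDP socket toward a public address makes the OS pick
    the outbound interface; connect() on a UDP socket sends no packets.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(local_ip())  # enter this address in XR Remote on the iPad
```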

Android tablet users can follow these steps and connect a device using NVIDIA Omniverse XR Remote.

Expand the design process and view 3D content as it was meant to be experienced, in full immersive detail. Download NVIDIA Omniverse XR Remote for iPad today from the App Store.

Categories
Misc

Forrester Report: ‘NVIDIA GPUs Are Synonymous With AI Infrastructure’

In an evaluation of enterprise AI infrastructure providers, Forrester Research on Monday recognized NVIDIA as a leader in AI infrastructure. The “Forrester Wave™: AI Infrastructure, Q4 2021” report states that “NVIDIA’s DNA is in every other AI infrastructure solution we evaluated. It’s an understatement to say that NVIDIA GPUs are synonymous with AI infrastructure.”

The post Forrester Report: ‘NVIDIA GPUs Are Synonymous With AI Infrastructure’ appeared first on The Official NVIDIA Blog.

Categories
Misc

Blender 3.0 Release Accelerated by NVIDIA RTX GPUs, Adds USD Support for Omniverse

’Tis the season for all content creators, especially 3D artists, this month on NVIDIA Studio. Blender, the world’s most popular open-source 3D creative application, launched a highly anticipated 3.0 release, delivering extraordinary performance gains powered by NVIDIA RTX GPUs, with added Universal Scene Description (USD) support for NVIDIA Omniverse.

The post Blender 3.0 Release Accelerated by NVIDIA RTX GPUs, Adds USD Support for Omniverse appeared first on The Official NVIDIA Blog.

Categories
Misc

Useful data summary statistics with image classification

Hello!

I am doing image classification with TensorFlow for learning purposes. I am splitting the data into 5 folds. I would like to get useful summary statistics on these validation sets. What could be useful other than the shape of the validation sets?

submitted by /u/The_Poor_Jew
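As a sketch of one possibility (plain NumPy on hypothetical labels, not tied to the poster's data): beyond the shape, the per-fold class distribution is usually the most informative summary, since an unlucky split can leave a class under-represented in a validation fold.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical labels standing in for an image-classification dataset.
labels = np.array([0, 1, 1, 0, 2, 1, 0, 2, 2, 1])

# Shuffle the indices and split them into 5 validation folds.
folds = np.array_split(rng.permutation(len(labels)), 5)

for i, fold in enumerate(folds):
    counts = Counter(labels[fold].tolist())
    print(f"fold {i}: size={len(fold)}, class counts={dict(sorted(counts.items()))}")
```

If the per-fold distributions look skewed, a stratified splitter (one that preserves class proportions in every fold) is the usual remedy.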

Categories
Misc

Advent of Code 2021 in pure TensorFlow – day 2. The limitations of Python enums and type annotations in TensorFlow programs

submitted by /u/pgaleone
Categories
Misc

Using AMD Radeon with TF in Anaconda Spyder

Hello,

I understand that TensorFlow is geared towards NVIDIA’s proprietary CUDA, but is there a workaround for an AMD Radeon GPU? I’m on a MacBook Pro with an AMD Radeon 580 external GPU card.

submitted by /u/ZThrock

Categories
Misc

Exit code 0xC0000409 when trying to run through a TensorFlow example in PyCharm

I am trying to go through the DCGAN example on the TensorFlow website: https://www.tensorflow.org/tutorials/generative/dcgan. It seems to run fine up until the step that calls the generator, generated_image = generator(noise, training=False). At that point it exits with Process finished with exit code -1073740791 (0xC0000409).

I am running on Windows 10 using PyCharm. I have tried changing the batch size in case this is a memory issue, but even setting it to 1 gives the same result. I have also tried running PyCharm as administrator.

submitted by /u/skywo1f

Categories
Misc

Tensorflow Lite Segmentation Fault

I am running TensorFlow Lite on my Raspberry Pi 3B+ with a custom object detection model. I have tested it on the Google COCO dataset and it works wonderfully, but when I test my custom trained model it does not work, despite the model passing TfLite Model Maker evaluation. When I run it, the only error I get is “Segmentation fault”. How can I fix this?

I am not able to upload my model to Stack Overflow, but here is some info about it: it only detects one object, it is not quantized, it is trained based on the efficientdet_lite1 model, and I trained it using the official TensorFlow Lite Model Maker Google Colab.

Here is the code used to interpret the model on my Pi.

https://pastebin.com/1at3ZAJd

I added a few print statements as well to troubleshoot, and it stops executing at around line 115.

Does anyone know how to fix this?

submitted by /u/MattDlr4

Categories
Misc

Advent of Code 2021 in pure TensorFlow – day 1

submitted by /u/pgaleone
Categories
Misc

The half_pixel_centers keyword seems to not exist in tf.image.resize in TF 2.0 or TF 2.7

With our robotics setup we use OpenCV for the images. However, in the CNN I use tf.io.decode_jpeg to open the images. These two methods slightly alter the image, so the same image, opened with the two different methods, can’t be classified by the CNN.

I found the differences in this blog: https://towardsdatascience.com/image-read-and-resize-with-opencv-tensorflow-and-pil-3e0f29b992be

which states that two things need to be changed to ensure that the two files are the same:

  1. dct_method='INTEGER_ACCURATE' needs to be added to the decode call.
  2. half_pixel_centers=True needs to be added to the resize method, which must also be forced to use bilinear interpolation.

However, the half_pixel_centers keyword is not found.

https://stackoverflow.com/questions/50591669/tf-image-resize-bilinear-vs-cv2-resize

This Stack Overflow post states that it was added in TF 2.0, with a link to the GitHub commit showing it has indeed been added: https://github.com/tensorflow/tensorflow/commit/3ae2c6691b7c6e0986d97b150c9283e5cc52c15f

About my code: I map the dataset through a function that reads the file path:

img = tf.io.read_file(file_path)

img = tf.io.decode_jpeg(img, channels=3, dct_method='INTEGER_ACCURATE')

resized_img = tf.image.resize(img, (28, 28), method=tf.image.ResizeMethod.BILINEAR, preserve_aspect_ratio=False, antialias=False, name=None, half_pixel_centers=True)

I also tried it on another machine with TF 2.7 and it gives the same error. Could someone point out what I am doing wrong, or is there perhaps a better way in general?

submitted by /u/Calond
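A note on the question above: as far as I can tell (worth confirming in the API docs), the half_pixel_centers keyword was added to the legacy tf.compat.v1.image.resize_bilinear op, while TF2's redesigned tf.image.resize samples with half-pixel centers by default and accepts no such argument, which would explain the unexpected-keyword error. The difference between the two coordinate mappings can be sketched in plain NumPy:

```python
import numpy as np

def src_coords(dst_len: int, src_len: int, half_pixel: bool) -> np.ndarray:
    """Map destination pixel indices to source coordinates when resizing.

    half_pixel=True matches OpenCV and TF2's tf.image.resize;
    half_pixel=False matches the old TF1 default (align_corners=False).
    """
    i = np.arange(dst_len, dtype=np.float64)
    scale = src_len / dst_len
    if half_pixel:
        return (i + 0.5) * scale - 0.5
    return i * scale

# Downscaling a length-4 row to length 2 samples different source points:
print(src_coords(2, 4, half_pixel=True))   # [0.5 2.5]
print(src_coords(2, 4, half_pixel=False))  # [0. 2.]
```

Under that assumption, simply dropping the keyword from tf.image.resize in TF2 should already give the OpenCV-matching half-pixel behavior the linked blog post describes.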