Categories
Misc

MacOS: ModuleNotFoundError: No module named ‘object_detection’

!python {'/content/generate_tfrecord.py'} -x {'/content/Training'} -l {'/content/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {'/content/generate_tfrecord.py'} -x {'/content/Testing'} -l {'/content/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}

Running this, I get the following error:

Traceback (most recent call last):
  File "/content/generate_tfrecord.py", line 29, in <module>
    from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'
Traceback (most recent call last):
  File "/content/generate_tfrecord.py", line 29, in <module>
    from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'

macOS Catalina 10.15.2, TensorFlow (latest version)

I have already installed all the dependencies through pip (including the Object Detection API), exported the path in the terminal, and ran "python setup.py install" in the same directory.

Can someone please help?
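A common cause of this error, offered here as a hedged suggestion rather than a confirmed fix, is that the models/research directory of the TensorFlow models repository is not on the Python path in the environment where generate_tfrecord.py actually runs. A minimal Python sketch of the kind of path setup usually needed (the ~/models clone location is an assumption, and the protos must already have been compiled with protoc):

    # Sketch only: make the Object Detection API importable by putting the
    # TensorFlow models repo's research/ directory (and research/slim/) on
    # sys.path before the failing import. Adjust MODELS_RESEARCH to wherever
    # the repo was actually cloned -- the path below is hypothetical.
    import sys
    from pathlib import Path

    MODELS_RESEARCH = Path.home() / "models" / "research"   # assumed clone location
    sys.path.append(str(MODELS_RESEARCH))
    sys.path.append(str(MODELS_RESEARCH / "slim"))

    # The import that fails in generate_tfrecord.py should now resolve.
    from object_detection.utils import dataset_util, label_map_util

Exporting PYTHONPATH to those same two directories in the shell before running the commands achieves the same thing.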

submitted by /u/Ghostly_Beast


Categories
Misc

Recommendations for new users

I’m sorry if this has been asked before, or if it’s obvious, but:

I’m trying to make a deep learning model that can recommend items to users based on the ratings they’ve given other items.

And I kind of understand how to do this.

But now comes the part that confuses me. Let’s say I deploy this model on my website, and then an existing user rates some new items, or an entirely new user shows up that the model doesn’t know about. Do I then need to retrain my entire model?

Or is there some way to build a recommender model that can serve these users without retraining the entire model?

I’ve tried googling this, but I can’t seem to find an answer anywhere (or I’m not searching for the right words).

Anyone have a suggestion that can push me in the right
direction?

Thanks in advance!
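As an illustrative aside, not something from the post: one standard way to avoid full retraining is to keep the trained item embeddings fixed and “fold in” a new or updated user by fitting only that user’s vector to their ratings, which is a small least-squares problem. A minimal numpy sketch, where the item factors and sizes are made-up stand-ins for an already-trained matrix-factorization model:

    import numpy as np

    def fold_in_user(item_factors, rated_item_ids, ratings, reg=0.1):
        """Estimate a user vector from that user's ratings alone, keeping the
        trained item factors fixed (ridge-regression "fold-in")."""
        V = item_factors[rated_item_ids]            # (n_rated, k) fixed item factors
        r = np.asarray(ratings, dtype=np.float64)   # (n_rated,) observed ratings
        k = V.shape[1]
        # Solve (V^T V + reg*I) u = V^T r for the user vector u.
        return np.linalg.solve(V.T @ V + reg * np.eye(k), V.T @ r)

    # Hypothetical usage: 1,000 items with 32-dim factors from a trained model.
    item_factors = np.random.randn(1000, 32)        # stand-in for learned factors
    u = fold_in_user(item_factors, rated_item_ids=[3, 42, 7], ratings=[5.0, 1.0, 4.0])
    scores = item_factors @ u                       # predicted score for every item
    print(np.argsort(-scores)[:10])                 # top-10 recommended item ids

For a brand-new user with no ratings at all there is nothing to fit, so systems typically fall back to popularity or content/metadata features until the first ratings arrive; either way, the expensive item-side training does not have to be redone for every new user.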

submitted by /u/EntrepreneurAmazing4


Categories
Misc

AI, Computational Advances Ring In New Era for Healthcare

We’re at a pivotal moment to unlock a new, AI-accelerated era of discovery and medicine, says Kimberly Powell, NVIDIA’s vice president of healthcare. Speaking today at the J.P. Morgan Healthcare conference, held virtually, Powell outlined how AI and accelerated computing are enabling scientists to take advantage of the boom in biomedical data to power faster…

Categories
Misc

New Video: Light Resampling In Practice with RTXDI

In this video, NVIDIA’s Alexey Panteleev explains the key details needed to add performant resampling to modern game engines. He also discusses roadmap plans for the recently announced RTXDI SDK, which allows easy experimentation and integration of direct illumination. 

With RTXDI, lighting artists can render scenes with millions of dynamic area lights in real-time without complex computational overheads or disruptive changes to the artist’s workflow. “RTXDI will let game developers use any meshes or primitive lights as key lights, which can cast dynamic raytraced shadows,” said Panteleev. 

This is a segment from a two-part video available on GTC on Demand, entitled “Rendering Game With Millions of Ray Traced Lights”. We encourage you to check out the remainder of the talk, in which NVIDIA’s Chris Wyman explains why now is the time to move from rasterization techniques to real-time ray tracing for game development. 

To learn more about NVIDIA’s real-time ray tracing technologies, as well as other great tools and SDKs for game developers, please visit https://developer.nvidia.com/industries/gamedev.

Categories
Misc

Object detection in tensorflow 2.X

I created Kerod, a pure TensorFlow 2 implementation of object detection algorithms (Faster R-CNN, DETR) aimed at production. It stands for Keras Object Detection.

It aims to provide a clear, reusable, tested, simple and documented codebase for TensorFlow 2.x.

You’ll be able to train models on COCO or Pascal VOC just by launching the notebooks.

Here is a link to the project: https://github.com/EmGarr/kerod.

Hope it helps!

submitted by /u/Em_Garr


Categories
Misc

Stream from the Cloud: NVIDIA CloudXR Release 2.0 Now Available

NVIDIA CloudXR Release 2.0 is now available. With NVIDIA CloudXR, users don’t need to be physically tethered to a high-performance computer to drive rich, immersive environments. The CloudXR SDK runs on NVIDIA servers located in the cloud, edge or on-premises, delivering the advanced graphics performance needed for wireless virtual, augmented or mixed reality environments — collectively known as XR.

This latest release includes new features that expand client support to more devices, including Oculus Quest 2 and HoloLens 2 display for AR streaming. And the new latency profiler helps providers and developers understand CloudXR latencies and performance.

Additional features include:

  • Oculus Quest 2 Client support 
  • Oculus Quest and Oculus Quest 2 Link Support with Windows client
  • HoloLens 2 display support with hand gestures mapped to controller buttons for interaction
  • Foveated Scaling, which allows improved visual quality optimized to match HMD optics
  • CloudXR Latency Profiler, which illustrates various latencies associated with the CloudXR streaming pipeline 
  • Android client ARM-64-v8 support for both AR and VR client
  • Configurable log file size and removal
  • Various bug fixes

Companies that have access to 5G networks can use NVIDIA CloudXR to stream immersive environments from their on-prem data centers. Telcos, software makers and device manufacturers can use the high bandwidth and low latency of 5G signals to provide high framerate, low-latency immersive XR experiences to millions of customers in more locations than previously possible.

“Using NVIDIA CloudXR and NVIDIA vGPU we have built a number of on-prem, high-performance solutions for customers who require the highest quality streaming XR,” said Andy Bowker, co-founder and CEO at The Grid Factory. “The NVIDIA CloudXR SDK has become a lynchpin for our end-to-end solutions, allowing us to deliver the highest quality turnkey solutions, and allowing our customers in turn to focus on their work instead of worrying about the technology.”

“NVIDIA CloudXR is an exciting new technology that’s becoming available on all platforms and devices,” said Greg Demchak, Director of the iLAB at Bentley. “With CloudXR, our customers will be able to extend their iTwin (Digital Twin) solutions to AR, VR, and MR devices.”

NVIDIA CloudXR Release 2.0 is now available for early access users. If you are interested in access to this private beta, apply at the NVIDIA CloudXR DevZone page.

To learn more about The Grid Factory and their CloudXR deployments, sign up for the upcoming webinar taking place on February 18. 

Categories
Misc

Long-Term Stock Forecasting


Categories
Misc

Out of This World Graphics: ‘Gods of Mars’ Come Alive with NVIDIA RTX Real-Time Rendering

The journey to making the upcoming film Gods of Mars changed course dramatically once real-time rendering entered the picture. The movie, currently in production, features a mix of cinematic visual effects with live-action elements. The film crew had planned to make the movie primarily using real-life miniature figures. But they switched gears last year once…

Categories
Misc

Freeze the Day: How UCSF Researchers Clear Up Cryo-EM Images with GPUs

When photographers take long-exposure photos, they maximize the amount of light their camera sensors receive. The technique helps capture scenes like the night sky, but it introduces blurring in the final image, as in the example at right. It’s not too different from cryo-electron microscopy, or cryo-EM, which scientists use to study the structure of…
