Categories
Misc

Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects

Matanya Horowitz smelled a problem in 2014. Fresh out of Caltech with a Ph.D., he saw that recycling centers lacked the robotics and computer vision needed to pick through heaps of garbage-contaminated recyclables. Horowitz founded AMP Robotics that year to harness AI running on NVIDIA GPUs to turn sorting out the trash into cash. It’s a ripe …

The post Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects appeared first on The Official NVIDIA Blog.


Any idea on how to fix this error I’m receiving?

I downloaded software that takes images of people and creates 3D models. I’m having an issue where the encodings fail, and I’m left with the message: AttributeError: module 'tensorflow' has no attribute 'ConfigProto' (raised on a tf.ConfigProto() call).

I have ZERO experience working with code/Python, so I’m utterly confused. I can post the full text if necessary. I’ve been trying to fix this for hours.
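In case it helps: this error almost always means the software was written against TensorFlow 1.x while a 2.x version is installed; ConfigProto was moved to the tf.compat.v1 namespace in TensorFlow 2. A minimal sketch of the usual workaround (the alternative is simply installing a 1.x release, e.g. pip install "tensorflow<2", to match what the software expects):

```python
import importlib

def get_config_proto():
    """Return the ConfigProto class from whichever namespace this
    TensorFlow version exposes it in."""
    tf = importlib.import_module("tensorflow")
    if hasattr(tf, "ConfigProto"):
        # TensorFlow 1.x: top-level attribute, nothing to do.
        return tf.ConfigProto
    # TensorFlow 2.x: the 1.x API survives under tf.compat.v1.
    return importlib.import_module("tensorflow.compat.v1").ConfigProto

# In the failing script, replacing `tf.ConfigProto()` with
# `get_config_proto()()` -- or editing the script's import line to
# `import tensorflow.compat.v1 as tf` -- avoids the AttributeError.
```

Editing the import line is the less invasive fix if you can locate the script that crashes; downgrading TensorFlow avoids touching the code at all.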

submitted by /u/r_hove


Loading Fashion-MNIST test data only

I’m working in a memory-constrained environment and trying to optimize memory usage as much as I can. Can I load the training data alone or the test data alone in a Jupyter notebook?
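tf.keras.datasets.fashion_mnist.load_data() always materializes both splits, so the common trick is to unpack and immediately discard the training arrays: (_, _), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data() followed by a del. To avoid ever holding the training split in memory, you can parse the raw IDX files yourself with NumPy (e.g. t10k-images-idx3-ubyte.gz and t10k-labels-idx1-ubyte.gz from the Fashion-MNIST repository); a sketch, assuming you have downloaded only the test files:

```python
import gzip
import struct
import numpy as np

def load_idx(path):
    # Parse a gzipped IDX file (the raw Fashion-MNIST on-disk format).
    # The 4-byte big-endian magic number encodes the dtype (0x08 = uint8,
    # the only dtype Fashion-MNIST uses) in byte 3 and the number of
    # dimensions in byte 4, followed by one uint32 per dimension.
    with gzip.open(path, "rb") as f:
        magic = struct.unpack(">I", f.read(4))[0]
        ndim = magic & 0xFF
        shape = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)

# x_test = load_idx("t10k-images-idx3-ubyte.gz")   # shape (10000, 28, 28)
# y_test = load_idx("t10k-labels-idx1-ubyte.gz")   # shape (10000,)
```

This never touches the 60,000-image training files, so peak memory stays at the size of the test split.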

submitted by /u/aaqi2


Why does my custom cosine similarity loss lead to NaNs when it is equivalent and largely identical to Keras’ implementation?

I need to implement CosineSimilarity myself because I need to work on the individual losses before calculating the batch-wide mean.

I do it like this:

    a_n = tf.math.l2_normalize(a, axis=-1)
    b_n = tf.math.l2_normalize(b, axis=-1)
    d = -tf.math.reduce_sum(a_n * b_n, axis=-1)
    # Above is _identical_ to Keras' implementation.
    return d, tf.math.reduce_mean(d)

I already compared the output to Keras’ implementation by repeatedly printing

 print(tf.math.reduce_sum(tf.math.abs(my_loss - keras_loss))) 

However, even though this prints straight zeros (and never any NaNs), I still encounter NaNs, while with Keras’ implementation I do not. I already tried a higher epsilon in l2_normalize, and tf.math.multiply_no_nan, to no avail.
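A likely culprit (an assumption, since the training setup isn’t shown): identical forward values do not imply identical gradients. l2_normalize divides by the norm, so a zero-norm row yields 0/0 in the backward pass even though the forward output (and therefore the printed comparison) is a clean 0, and Keras guards its internals differently. A NumPy sketch of a formulation whose forward and backward passes both stay finite; the TensorFlow analogue is a single tf.math.divide_no_nan(dot, norm_a * norm_b), or clamping the denominator as below:

```python
import numpy as np

def cosine_loss(a, b, eps=1e-12):
    # Divide once by the clamped product of norms instead of normalizing
    # a and b separately: an all-zero row gives -0/eps = 0 rather than
    # NaN, and the gradient of this expression is finite everywhere.
    dot = np.sum(a * b, axis=-1)
    denom = np.maximum(
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1), eps
    )
    d = -dot / denom
    return d, d.mean()
```

If NaNs persist with this form, check the inputs themselves (e.g. with tf.debugging.check_numerics) rather than the loss.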

Update: This comment.

submitted by /u/tfhwchoice


What can you do with the confidence score of a detection?

Edit: Sorry, should have read the rules first. Mods, if you take this down because it’s not TensorFlow-specific, I understand.

I’m just starting to play with neural networks, object detection, and tracking. I’m wondering what people use the confidence score of a detection for. Are there any common uses beyond simple confidence thresholding (i.e. output a detection if conf > 0.5, otherwise don’t)? Papers that use the confidence value in interesting ways are welcome!

For my own project, I was wondering how I might use the confidence score in the context of object tracking. For fun, and because it’s a super common application, I’ve been playing around with a traffic-sign detector and deploying it in a simulation. In the simulation, I get consistent and accurate predictions for real signs, and then frequent but short-lived (i.e. 1–3 frame lifetime) false positives. I was thinking I could do some sort of tracking that uses the confidence values over a series of predictions to compute some kind of detection probability. That is: if I look at a series of 30 frames and in 20 of them I have a 0.3-confidence detection, with the bounding boxes all belonging to the same tracked object, then I’d argue there is more evidence that an object is there than if, in another series of 30 frames, only 2 detections belong to a single object, even at a higher confidence, e.g. conf = 0.6. How can I leverage the confidence scores to create a more robust detection and tracking pipeline? Or am I already way off base? (I’ve been trying to come up with a formula, but probability and stochastics were never my strong suit, and I know the formulas I’ve been writing down implicitly assume independence, which may not hold here.)
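One standard way to formalize this “accumulated evidence” idea is Bayesian log-odds fusion of the per-frame confidences. It does assume frame-to-frame independence (the very assumption worried about above), and it needs a base rate p0, a hypothetical prior for how often a candidate track is a real object; a minimal sketch:

```python
import math

def fused_probability(confidences, p0=0.1):
    # Treat each per-frame confidence c as P(object | that frame) and sum
    # log-odds relative to the base rate p0, then map the total back to a
    # probability. Frames are treated as independent -- an approximation.
    prior_logit = math.log(p0 / (1 - p0))
    total = prior_logit + sum(
        math.log(c / (1 - c)) - prior_logit for c in confidences
    )
    return 1 / (1 + math.exp(-total))
```

With p0 = 0.1, twenty frames at confidence 0.3 fuse to a higher probability than two frames at 0.6, matching the intuition above; thresholding the fused probability then suppresses the short-lived false positives, since 1–3 frames accumulate little evidence.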

Anyway, how do you use the confidence values in your own projects?

submitted by /u/ItsAnApe


Converting a TensorFlow model and checkpoint to ONNX

I am trying to convert a pretrained model (EfficientNet) that I have trained on some custom images and new labels. But when using tf2onnx to convert it to ONNX format, it asks for a checkpoint .meta file, which I can’t find anywhere; after training I only see .index and .data files.
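The .meta file is a TensorFlow 1.x artifact; checkpoints written by TensorFlow 2.x consist only of .index/.data shards, which tf2onnx’s --checkpoint flag cannot consume. The usual route is to restore the weights into the model, export a SavedModel, and convert that instead. A sketch with hypothetical paths (build_efficientnet stands in for however the model was constructed during training; the architecture must match the checkpoint):

```shell
# Inside Python / a notebook (hypothetical names):
#   model = build_efficientnet(...)            # same architecture as in training
#   model.load_weights("ckpt/my-checkpoint")   # the common prefix of the
#                                              # .index/.data files
#   model.save("saved_model_dir")              # exports a SavedModel
#
# Then convert the SavedModel rather than the raw checkpoint:
python -m tf2onnx.convert --saved-model saved_model_dir --output model.onnx
```

The SavedModel path is also what most tf2onnx examples assume, so it tends to be the least fragile option for TF 2.x models.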

submitted by /u/uebyte


[Video] Running TensorFlow Lite Models on Raspberry Pi

Many deep learning models created with TensorFlow require substantial processing power to perform inference. Fortunately, there is a lightweight version of TensorFlow called TensorFlow Lite (TFLite for short) that allows these models to run on devices with limited capabilities, with inference performed in less than a second.

This tutorial will go through how to prepare Raspberry Pi (RPi) to run a TFLite model for classifying images. After that, the TFLite version of the MobileNet model will be downloaded and used for making predictions on-device.

Tutorial video link: https://youtu.be/FdfxizUUQJI

Run the code on a free GPU: https://console.paperspace.com/ml-showcase/notebook/rljtgo7aadmiq7q?file=Raspberry%20Pi%20TF%20Lite%20Models.ipynb

submitted by /u/hellopaperspace


Support for creating tf.data.Dataset generators and image augmentation.

This package makes it easy to create efficient image Dataset generators.

github link

Supported Augmentations

  • standardize
  • resize
  • random_rotation
  • random_flip_left_right
  • random_flip_up_down
  • random_shift
  • random_zoom
  • random_shear
  • random_brightness
  • random_saturation
  • random_hue
  • random_contrast
  • random_crop
  • random_noise

submitted by /u/last_peng


Lunar Has It: Broadcasting Studio Uses NVIDIA Omniverse to Create Stunning Space Documentary

Audiences are making a round trip to the moon with a science documentary that showcases China’s recent lunar explorations. Fly to the Moon, a series produced by China Media Group (CMG) entirely in NVIDIA Omniverse, details the history of China’s space missions and shares some of the best highlights of the Chang’e 4 lunar lander …

The post Lunar Has It: Broadcasting Studio Uses NVIDIA Omniverse to Create Stunning Space Documentary appeared first on The Official NVIDIA Blog.


To Infinity, and Beyond: Ohio State University Builds AV Cybersecurity Platform for Long-Term Research on NVIDIA DRIVE

Researchers at The Ohio State University are aiming to take autonomous driving to the limit. Autonomous vehicles require extensive development and testing for safe widespread deployment. A team at The Ohio State Center for Automotive Research (CAR) is building a Mobility Cyber Range (MCR) — a dedicated platform for cybersecurity testing — in a self-driving …

The post To Infinity, and Beyond: Ohio State University Builds AV Cybersecurity Platform for Long-Term Research on NVIDIA DRIVE appeared first on The Official NVIDIA Blog.