Categories
Misc

Tensorflow 2 not respecting thread settings

I am running a TensorFlow application that sets inter_op, intra_op and OMP_NUM_THREADS; however, it completely ignores these settings and seems to run with the defaults. Here's how I'm setting them:

    import os
    import tensorflow as tf

    print('Using Thread Parallelism: {} NUM_INTRA_THREADS, {} NUM_INTER_THREADS, {} OMP_NUM_THREADS'.format(
        os.environ['NUM_INTRA_THREADS'], os.environ['NUM_INTER_THREADS'], os.environ['OMP_NUM_THREADS']))

    session_conf = tf.compat.v1.ConfigProto(
        inter_op_parallelism_threads=int(os.environ['NUM_INTER_THREADS']),
        intra_op_parallelism_threads=int(os.environ['NUM_INTRA_THREADS']))
    sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
    tf.compat.v1.keras.backend.set_session(sess)

I have validated that it is reading the right values (the print statement shows them as expected). I have also tried other TensorFlow 2 versions with no success. I am at a loss as to what I'm doing wrong.

Version info:

    tensorflow            2.2.0  py37_2        intel
    tensorflow-base       2.2.0  0             intel
    tensorflow-estimator  2.2.0  pyh208ff02_0
    keras                 2.4.3  0
    keras-base            2.4.3  py_0
    keras-preprocessing   1.1.0  py_1
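One likely explanation worth checking: in TensorFlow 2 the thread pools are created the first time the runtime initializes, so threading settings applied after any op has already run are silently ignored. A minimal sketch of setting the same knobs through the TF2-native tf.config.threading API, reusing the environment variables from the post (the fallback defaults here are placeholders); it must run before any other TensorFlow call:

    import os
    import tensorflow as tf

    # Must execute before any op runs or any session/model is created,
    # otherwise TensorFlow has already fixed its thread-pool sizes.
    tf.config.threading.set_inter_op_parallelism_threads(int(os.environ.get('NUM_INTER_THREADS', '2')))
    tf.config.threading.set_intra_op_parallelism_threads(int(os.environ.get('NUM_INTRA_THREADS', '4')))

    # Read the values back to confirm they took effect.
    print(tf.config.threading.get_inter_op_parallelism_threads(),
          tf.config.threading.get_intra_op_parallelism_threads())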

submitted by /u/dunn_ditty


Categories
Misc

Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound

Brendon Cassidy, CTO and chief scientist at Super Hi-Fi, uses AI to give everyone the experience of a radio station tailored to their unique tastes. Super Hi-Fi, an AI startup and member of the NVIDIA Inception program, develops technology that produces smooth transitions, intersperses content meaningfully and adjusts volume and crossfade. Started three years ago, Read article >

The post Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound appeared first on The Official NVIDIA Blog.

Categories
Misc

I am stuck in defining the variables.

The code runs like this:

    import tensorflow as tf
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

    mnist_data = tf.keras.datasets.mnist
    (train_images, train_labels), (test_images, test_labels) = mnist_data.load_data()

    def scale_mnist_data(train_images, test_images):
        return (train_images / 255, test_images / 255)

    def train_model(model, scaled_train_images, train_labels):
        scaled_train_images, scaled_test_images = scale_mnist_data(train_images, test_images)

The code runs perfectly well up to this point, but here…

    scaled_train_images = scaled_train_images[..., np.newaxis]
    scaled_test_images = scaled_test_images[..., np.newaxis]

…I get the error: NameError: name ‘scaled_train_images’ is not defined

    NameError                                 Traceback (most recent call last)
    <ipython-input-5-7e4c845d2449> in <module>
          1 # Add a dummy channel dimension
          2
    ----> 3 scaled_train_images = scaled_train_images[..., np.newaxis]
          4 scaled_test_images = scaled_test_images[..., np.newaxis]

I wonder whether the line “def train_model(model, scaled_train_images, train_labels):” is the problem. Here again, I bumped into similar issues with history, frame and some other variables not being defined.
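The NameError most likely comes from scoping: scaled_train_images is only ever assigned inside train_model, so it does not exist at the notebook's top level. A minimal sketch of a fix under that assumption, calling the scaling helper at the top level before adding the channel dimension:

    # Call the helper at the top level so its results are in scope for later cells.
    scaled_train_images, scaled_test_images = scale_mnist_data(train_images, test_images)

    # Now the dummy channel dimension can be added without a NameError.
    scaled_train_images = scaled_train_images[..., np.newaxis]
    scaled_test_images = scaled_test_images[..., np.newaxis]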

submitted by /u/edmondoh001


Categories
Misc

How to return polygon bound or 3d image segmentation for a detected object with TensorFlow Lite and MLKit?

(posted here for more advice)

I am making a project that utilizes MLKit. The classification model will be a TensorFlow Lite model. I noticed that detected objects always come back with rectangular bounding boxes. I would like polygonal bounds shaped like the object being detected, or, if possible, a sort of “3D” bound.

I am aware of certain annotation tools, along with things like Mask R-CNN, but I am not sure how to integrate them into a TensorFlow Lite model, which specific files to edit, whether this should be implemented in the model rather than in the base code, or whether it can be done at all.

I want the detected objects to return bounding polygons, or even
3D polygons/image segmentations, instead of bounding boxes, using
MLKit + TensorFlow Lite. How do I achieve this?
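Rectangular boxes are what the ML Kit object-detection API returns; getting polygon or per-pixel output generally means running a segmentation model instead. A minimal Python sketch of the idea, assuming a hypothetical DeepLab-style deeplabv3.tflite model and an input image flower.jpg (file names, preprocessing and the target class id are placeholders; a production version would run the same interpreter on-device):

    import numpy as np
    import tensorflow as tf
    import cv2  # OpenCV 4.x, used only to extract polygon contours from the mask

    # Hypothetical segmentation model (float input, per-class logits output).
    interpreter = tf.lite.Interpreter(model_path="deeplabv3.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Resize the image to the model's expected input shape.
    _, height, width, _ = input_details[0]["shape"]
    image = cv2.imread("flower.jpg")
    resized = cv2.resize(image, (width, height)).astype(np.float32) / 255.0
    interpreter.set_tensor(input_details[0]["index"], resized[np.newaxis, ...])
    interpreter.invoke()

    # Output is [1, H, W, num_classes]; argmax over the class axis gives a label map.
    logits = interpreter.get_tensor(output_details[0]["index"])[0]
    label_map = np.argmax(logits, axis=-1).astype(np.uint8)

    # Turn the mask for one class of interest into polygon contours.
    target_class = 1  # placeholder class id
    mask = (label_map == target_class).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = [c.reshape(-1, 2) for c in contours]  # each row is an (x, y) vertex
    print(len(polygons), "polygon(s) found")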

submitted by /u/0zeroBudget


Categories
Offsites

The medical test paradox: Can redesigning Bayes rule help?

Categories
Misc

Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020

Much of 2020 may look best in the rearview mirror, but the year also held many moments of outstanding work, gems worth hitting the rewind button to see again. So, here’s a countdown — roughly in order of ascending popularity — of 10 favorite NVIDIA videos that hit YouTube in 2020. With two exceptions for Read article >

The post Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020 appeared first on The Official NVIDIA Blog.

Categories
Misc

a practiced eye for react.js and tensorflow

Does anyone have any insight into this Stack Overflow post:
https://stackoverflow.com/questions/65402617/tensorflow-automl-model-in-react

Getting a little desperate.

submitted by /u/eagletongue


Categories
Misc

[Tutorial] How to Train Object Detector with TF Object Detection API

Object detection is a computer vision task that has benefited enormously from recent progress in machine learning.

Now, with tools like the TensorFlow Object Detection API, you can create reliable models quickly and fairly easily.

If you’re unfamiliar, the TensorFlow Object Detection API:

– supports TensorFlow 2,
– lets you employ state-of-the-art model architectures for object detection,
– gives you a simple way to configure models.

The tutorial shows everything from installation and setup all the way to model training.


TF object detection API tutorial
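As a taste of what the exported models look like in use, here is a minimal sketch that loads one of the pre-trained detection SavedModels published on TF Hub and runs it on a single image (the model handle, the image file street.jpg and the 0.5 score threshold are placeholder choices, not taken from the tutorial):

    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub

    # Hypothetical choice of detector; other Object Detection API SavedModels
    # exported for TF2 expose the same calling convention.
    detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

    # The exported signature expects a uint8 batch of shape [1, H, W, 3].
    image = tf.io.decode_jpeg(tf.io.read_file("street.jpg"), channels=3)
    result = detector(image[tf.newaxis, ...])

    boxes = result["detection_boxes"][0].numpy()    # [N, 4] normalized ymin, xmin, ymax, xmax
    scores = result["detection_scores"][0].numpy()  # [N] confidence scores
    classes = result["detection_classes"][0].numpy().astype(np.int32)  # COCO label ids

    # Keep only confident detections (threshold is a placeholder).
    keep = scores > 0.5
    for cls, score, box in zip(classes[keep], scores[keep], boxes[keep]):
        print(cls, round(float(score), 2), box)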

submitted by /u/kk_ai


Categories
Misc

Transfer learning using a small dataset

I’m building an image classifier. I happen to have a small
dataset of ideal data. Can I train a model using this idealised
data, and somehow use it as a base for further training?

I’ve read through the docs; they all use ImageNet or
tensorflow-hub datasets. I can’t seem to find an example of using
your own data.
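A minimal Keras sketch of one way to do this, assuming the images live in folders like ideal_data/&lt;class&gt;/ and full_data/&lt;class&gt;/ (directory names, image size and the MobileNetV2 backbone are placeholders): train a head on the ideal images first, save the weights, then reload them and continue training on the full dataset.

    import tensorflow as tf

    # Hypothetical folder layout: ideal_data/<class>/*.jpg and full_data/<class>/*.jpg
    # (image_dataset_from_directory needs TF 2.3 or newer).
    ideal_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "ideal_data", image_size=(160, 160), batch_size=32)
    full_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "full_data", image_size=(160, 160), batch_size=32)

    # Frozen pre-trained backbone plus a small trainable head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet")
    base.trainable = False
    model = tf.keras.Sequential([
        tf.keras.layers.experimental.preprocessing.Rescaling(1.0 / 127.5, offset=-1),
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(len(ideal_ds.class_names), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Stage 1: fit the head on the small, idealised dataset and keep the weights.
    model.fit(ideal_ds, epochs=10)
    model.save_weights("ideal_base.h5")

    # Stage 2: reload those weights and continue training on the full dataset
    # (assumes both directories contain the same classes).
    model.load_weights("ideal_base.h5")
    model.fit(full_ds, epochs=10)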

submitted by /u/BananaCharmer


Categories
Misc

Is duplicating images which are good representations of the type of thing being classified a good idea?

Say you’re classifying the flowers dataset. Some images aren’t
as good as others. Would duplicating the images that are good
examples of a certain type help propagate the desired features in
the network?

E.g. if I duplicate a close-up of a certain type of flower head within the dataset (say a rose within /roses), would it make the network more biased towards the duplicates?

I have a handful of ideal examples and thousands of very variable examples. I'm unsure what the best strategy is to bias the network towards the good examples in my data.
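Duplicating files does act as a crude form of oversampling, but the same effect can be had without touching the dataset on disk by mixing two tf.data streams with chosen probabilities. A minimal sketch with stand-in random data (in practice ideal_ds and variable_ds would come from image_dataset_from_directory(...).unbatch() or a similar pipeline; the 0.2/0.8 weights are placeholders):

    import numpy as np
    import tensorflow as tf

    # Stand-in (image, label) datasets: a handful of "ideal" examples and many variable ones.
    ideal_ds = tf.data.Dataset.from_tensor_slices(
        (np.random.rand(8, 64, 64, 3).astype("float32"), np.zeros(8, dtype="int32")))
    variable_ds = tf.data.Dataset.from_tensor_slices(
        (np.random.rand(1000, 64, 64, 3).astype("float32"), np.ones(1000, dtype="int32")))

    # Oversample the ideal examples without duplicating files: repeat both streams
    # and draw from them with fixed probabilities.
    mixed = tf.data.experimental.sample_from_datasets(
        [ideal_ds.repeat(), variable_ds.repeat()], weights=[0.2, 0.8])

    batches = mixed.shuffle(1024).batch(32)

    # The mixed dataset is infinite, so model.fit(...) needs steps_per_epoch;
    # roughly 20% of each batch comes from the ideal examples, however few there are.
    for images, labels in batches.take(1):
        print(images.shape, float(tf.reduce_mean(tf.cast(labels == 0, tf.float32))))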

submitted by /u/BananaCharmer
