
Tensorflow on ARM Devices

Hey everyone,

I’m using a Surface Pro X and wanted to get a bit into deep learning and
neural networks using TensorFlow. Is it possible to install TensorFlow on
ARM devices, and if so, how? The first big hurdle seems to be that I can’t
install Python as a 64-bit version. Should I maybe use emulation? Thanks
for any help!
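As a first step, a quick check of which interpreter is actually running can help. This is just a standard-library sketch (no TensorFlow needed), since the official TensorFlow wheels require a 64-bit Python build:

import platform
import struct

# Architecture reported for this process, e.g. 'ARM64', or 'AMD64'/'x86'
# when running an emulated x86 interpreter on Windows-on-ARM.
print(platform.machine())

# Pointer size in bits (32 or 64), i.e. the bitness of the Python build itself.
print(struct.calcsize("P") * 8)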

submitted by /u/Hot-Ad-3651


Export a model for inference.

Hi, All,

I have written a script to export a pre-trained TensorFlow model for
inference. The inference code is for the model in this repository:
https://github.com/sabarim/itis.

I used the DeepLab export_model.py script as a reference to write a
similar one for this model.

Reference script link:
https://github.com/tensorflow/models/blob/master/research/deeplab/export_model.py

My script:

https://projectcode1.s3-us-west-1.amazonaws.com/export_model.py

I am getting an error when I try to run inference from the saved model.

FailedPreconditionError: 2 root error(s) found.
  (0) Failed precondition: Attempting to use uninitialized value decoder/feature_projection0/BatchNorm/moving_variance
      [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
      [[SemanticPredictions/_13]]
  (1) Failed precondition: Attempting to use uninitialized value decoder/feature_projection0/BatchNorm/moving_variance
      [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
0 successful operations. 0 derived errors ignored.

Could anyone please take a look and help me understand the problem?
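Without being able to run the script, the usual cause of this particular FailedPreconditionError is that the variables in the export graph are never restored from the training checkpoint (or initialized) inside the session that writes the SavedModel, so the exported graph refers to values that do not exist at inference time. A minimal TF1-style sketch of the export pattern, using a stand-in graph and placeholder paths rather than the actual itis model:

import os
import tempfile

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.Graph().as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 1], name="ImageTensor")
    # Stand-in for variables such as decoder/feature_projection0/BatchNorm/moving_variance.
    moving_variance = tf.compat.v1.get_variable("moving_variance", shape=[1])
    y = tf.identity(x * moving_variance, name="SemanticPredictions")

    saver = tf.compat.v1.train.Saver()
    export_dir = os.path.join(tempfile.mkdtemp(), "export")
    with tf.compat.v1.Session() as sess:
        # Every variable needs a value before export; skipping this step (or
        # the restore below) reproduces the "uninitialized value" error.
        sess.run(tf.compat.v1.global_variables_initializer())
        # In the real script, restore the trained checkpoint here instead:
        # saver.restore(sess, "/path/to/model.ckpt")
        tf.compat.v1.saved_model.simple_save(
            sess, export_dir, inputs={"image": x}, outputs={"semantic": y})

It may also be worth confirming that the checkpoint actually contains the BatchNorm moving statistics, e.g. with tf.train.list_variables.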

submitted by /u/DamanpKaur


Tensorboard: Should I be using the smoothed or normal value for evaluating accuracy?

Hi Everyone,

Pretty much in the title. I’m pretty sure that the smoothed
values are some sort of exponential moving average.

When evaluating the accuracy of the model (say, the accuracy I
want to tell people my model can achieve on the validation set, for
some nth epoch), should I be using the smoothed value or the normal
value? I take the accuracy every epoch.

Of course, this is before the final evaluation on the test set, but before
doing that, to get a sense of my maximum accuracy and to gauge whether the
hyperparameters I’m choosing are working, should I go by the smoothed or
the unsmoothed values?

An example:

On step no. 151 (epoch 8):

smoothed accuracy (with smoothing = 0.6) is 36.25%

“real” accuracy is 42.86%

Is my actual accuracy 36.25% or 42.86%?
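For what it’s worth, 42.86% is what the model actually scored on that evaluation; the smoothed curve is only a display aid. A rough sketch of the kind of exponential moving average the TensorBoard slider applies (simplified, ignoring the debiasing it does for early points; the raw values below are made up):

# Simplified version of TensorBoard's "smoothing" slider: an exponential
# moving average over the logged scalar values.
def smooth(values, weight=0.6):
    smoothed = []
    last = values[0]
    for value in values:
        last = last * weight + (1.0 - weight) * value
        smoothed.append(last)
    return smoothed

raw = [0.30, 0.33, 0.39, 0.4286]   # hypothetical per-epoch validation accuracy
print(smooth(raw))                 # the curve TensorBoard would display

So when reporting accuracy for a given epoch, quote the unsmoothed value; the smoothed curve is mainly useful for judging trends while tuning hyperparameters.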

Thanks!

A

submitted by /u/kirbyburgers


How to get the equation that a multiple linear regression model is using in Keras w/ Tensorflow?

I have the weights and biases for both the normalizer and the Dense layer
in my model, but I am unsure how to convert these values into the single
equation the model uses to predict values, which is what I would like to
know. The model takes 2 independent variables and predicts 1 value, so,
using the weights and biases below, how could I formulate that equation?

weights for normalizer layer: [ 8.89 11.5 ]

biases for normalizer layer: [321.69 357.53]

(not even sure if the normalizer biases and weights matter as it
is part of preprocessing)

weights for Dense layer: [[ 0.08] [19.3 ]]

biases for Dense layer: [11.54]

Thank you very much and I would greatly appreciate any help! 🙂
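In case it helps, here is a sketch of collapsing the two layers into one equation. It assumes the normalizer’s stored values are the per-feature means and variances (worth verifying against your layer; a Keras Normalization layer outputs (x - mean) / sqrt(variance)) and that the Dense layer computes y = x_norm · W + b:

import numpy as np

mean = np.array([8.89, 11.5])      # normalizer values, assumed to be the means
var = np.array([321.69, 357.53])   # normalizer values, assumed to be the variances
W = np.array([[0.08], [19.3]])     # Dense kernel (2 inputs -> 1 output)
b = np.array([11.54])              # Dense bias

# y = ((x - mean) / sqrt(var)) @ W + b
#   = x @ (W / sqrt(var)) + (b - mean @ (W / sqrt(var)))
w_eff = W / np.sqrt(var)[:, None]  # effective slope for each input feature
b_eff = b - mean @ w_eff           # effective intercept
print(f"y = {w_eff[0, 0]:.4f}*x1 + {w_eff[1, 0]:.4f}*x2 + {b_eff[0]:.4f}")

If the normalizer actually stores a different scale/offset pair, the same substitution works; only the mean and var arrays change. And yes, the normalizer values matter: without them the Dense weights apply to the normalized inputs, not the raw ones.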

submitted by /u/HexadecimalHero


Does my model make sense? It’s looking thicc but I don’t know

I’ve built my first model and I’m not very experienced, so I’m unsure
whether it’s structured correctly.

I have the VGG16 model on top (frozen) and I connect this to a dense layer
that I train on categorical data (6 classes).

_________________________________________________________________
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 150, 150, 3)]     0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 8192)              0
_________________________________________________________________
dense (Dense)                (None, 128)               1048704
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 774
=================================================================
Total params: 15,764,166
Trainable params: 1,049,478
Non-trainable params: 14,714,688
_________________________________________________________________

I want to apply what the model has learnt so far to a binary
classification problem. So, once trained on my categorical data, I freeze
`dense` and remove `dense_1`, then I add `dense_2`, `dense_3`, and
`dense_4` (the last having 1 output).

continued from before...
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 8192)              0
_________________________________________________________________
dense (Dense)                (None, 128)               1048704
_________________________________________________________________
dense_2 (Dense)              (None, 128)               16512
_________________________________________________________________
dense_3 (Dense)              (None, 128)               16512
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 129
=================================================================
Total params: 15,796,545
Trainable params: 33,153
Non-trainable params: 15,763,392

Then I train it on my binary data (I have set up augmentation,
preprocessing, etc.).
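For reference, here is a rough Keras sketch of the two stages as described (the layer names, the 150x150 input size, and the data pipelines are assumptions based on the summaries above, not your exact code):

from tensorflow import keras
from tensorflow.keras import layers

# Stage 1: frozen VGG16 base, one trainable dense layer, 6-way softmax head.
base = keras.applications.VGG16(include_top=False, weights="imagenet",
                                input_shape=(150, 150, 3))
base.trainable = False

inputs = keras.Input(shape=(150, 150, 3))
x = base(inputs, training=False)
x = layers.Flatten()(x)
features = layers.Dense(128, activation="relu", name="dense")(x)
outputs = layers.Dense(6, activation="softmax", name="dense_1")(features)
model = keras.Model(inputs, outputs)
# model.compile(...); model.fit(categorical_data, ...)

# Stage 2: freeze the learned dense layer, drop the 6-way head, and add a
# new stack ending in a single sigmoid unit for the binary task.
model.get_layer("dense").trainable = False
x = layers.Dense(128, activation="relu", name="dense_2")(features)
x = layers.Dense(128, activation="relu", name="dense_3")(x)
binary_output = layers.Dense(1, activation="sigmoid", name="dense_4")(x)
binary_model = keras.Model(inputs, binary_output)
# binary_model.compile(loss="binary_crossentropy", ...); binary_model.fit(binary_data, ...)

Structurally this is a standard transfer-learning setup; the main judgment call is that only ~33k parameters train in stage 2 (per the second summary), which is reasonable if the binary task is closely related to the 6-class one.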

Does this network make sense though? I don’t have the deep
understanding many people here do, so not really sure. Any input
would be appreciated.

submitted by /u/BananaCharmer


Most tutorials seem outdated

I’ve been learning machine learning at uni, but I haven’t done as much
practical work as I’d like, so I decided to do some over the holidays.

Most of the books I’ve looked at (e.g. Deep Learning Pipeline) are pretty
recent (2018-ish), but they mostly feature TensorFlow 1, need a previous
version of Keras to be compatible, and so on. Things like the MNIST
dataset also come in different forms across different versions.

For TensorFlow I’ve just been using

tf.compat.v1.function()

to keep compatibility with TensorFlow 1 so I can follow along with the
examples better, but should I just try to find something more recent
than 2018?
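If the goal is just running TF1-era book examples on a current install, a common alternative to wrapping individual calls is the compatibility shim from the TensorFlow migration guide; a minimal sketch (the placeholder line is only there to show TF1-style code running unchanged):

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # run a TF2 install with TF1 semantics (graphs, sessions)

# TF1-style tutorial code then works largely as written, for example:
x = tf.placeholder(tf.float32, shape=[None, 784])
with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: [[0.0] * 784]}))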

One of the tutorials also wanted me to run all the code on an Ubuntu
Google Cloud machine?

Are there any super good TensorFlow books that are up to date that you’d
recommend? I’ve literally just been searching for deep learning in the
university’s online library.

It seems kinda dumb that the way the framework operates changes so much in
such a short period of time. I’m willing to put the time in, but I don’t
want to go through a 500-page book only to realize that everything is now
obsolete. Also, how the hell do people working in the industry deal with
this, when half of the code they’ve written is no longer compatible with
the main version?

submitted by /u/eht_amgine_enihcam


Any pre-trained TensorFlow models on speech/voice data?

Hi All,

I have been looking for TensorFlow models pre-trained on speech data,
preferably in JS/Python, that I can use to extract embeddings for
streaming/recorded audio up to 1 minute long.

I intend to use the embeddings as an input to my machine
learning pipeline.

So far, I have found only this:


https://github.com/tensorflow/tfjs-models/tree/master/speech-commands

This is trained to classify 20 voice commands, so I feel the embeddings
from this model may not have sufficient discriminative power to identify,
let’s say, phonemes, or 1,000 words each from English, French, and a few
other popular languages.

I am not worried about embedding-to-word mapping. At the current stage, I
am happy to use the embeddings to evaluate a similarity score between two
different sound samples. E.g., I am not worried about resolving confusion
between ‘red’ and ‘read’ (past tense); in fact, ‘I read a red book’ and
‘Eye red a read buk’ should result in a 95+% match.

Any hints/redirection are also greatly appreciated. Perhaps
there are simpler ways to achieve the same.
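Not a complete answer, but as an illustration of how little code it takes to try a pre-trained audio embedding model from TF Hub: the sketch below uses YAMNet, which is trained on general audio events rather than speech specifically, so whether its embeddings are discriminative enough for phoneme/word-level similarity would need testing on your data.

import numpy as np
import tensorflow_hub as hub

# YAMNet expects a mono float32 waveform sampled at 16 kHz; one second of
# silence stands in for real audio here.
model = hub.load("https://tfhub.dev/google/yamnet/1")
waveform = np.zeros(16000, dtype=np.float32)
scores, embeddings, log_mel_spectrogram = model(waveform)
print(embeddings.shape)  # one 1024-dimensional embedding per audio frame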

submitted by /u/akshayxyz


AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue

Penn State University pals Brad Bogolea and Mirza Shah were living in Silicon Valley when they pitched Jeff Gee on their robotics concepts. Fortunately for them, the star designer was working at the soon-to-shutter Willow Garage robotics lab. So the three of them (Shah was also a software engineer at Willow) joined together Read article >

The post AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue appeared first on The Official NVIDIA Blog.


(Windows) TensorFlow not detecting the cudart64_110.dll file

Yesterday I installed the latest CUDA toolkit (11.2), but TensorFlow said
there was no cudart64_110.dll file. So I then installed CUDA toolkit 11.0,
which has this file, but TensorFlow still cannot find it.

I am running Windows 10 Home Edition.
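A quick way to check whether the DLL TensorFlow is asking for is actually reachable. The install path below is the default for CUDA 11.0 and is an assumption; adjust it if you installed somewhere else:

import ctypes
import os

cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin"

print("DLL on disk:", os.path.exists(os.path.join(cuda_bin, "cudart64_110.dll")))
print("bin dir on PATH:", cuda_bin.lower() in os.environ.get("PATH", "").lower())

# On Python 3.8+ under Windows, DLL directories can be registered explicitly
# before importing TensorFlow:
os.add_dll_directory(cuda_bin)
ctypes.WinDLL("cudart64_110.dll")  # raises OSError if it still cannot be loaded

If the DLL exists but its directory is not on PATH, adding the CUDA bin directory to the system PATH and restarting the shell/IDE is what the TensorFlow GPU setup instructions expect.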

submitted by /u/Comprehensive-Ad3963


How to write code that computes and displays the loss and accuracy of the trained model on the test set?

I’m rather embarrassed about flooding this forum with mostly novice
questions recently. I’m still a newbie, still struggling to figure out how
code works in TensorFlow, so pardon me for that. Is there any template
code I can use to compute and display the loss and accuracy of a trained
model on the test set?
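If the model is built with Keras, model.evaluate() already returns the loss followed by every metric passed to compile(). A minimal, self-contained sketch using MNIST as a stand-in for the actual model and test set:

import tensorflow as tf

# Stand-in data and model; replace with your own dataset and architecture.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# model.evaluate() returns [loss, metric_1, ...] in the order given to compile().
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"Test loss: {test_loss:.4f}")
print(f"Test accuracy: {test_acc:.2%}")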

submitted by /u/edmondoh001
