Categories
Misc

TensorFlow on ARM Devices

Hey everyone,

I’m using a Surface Pro X and wanted to get a little bit into
deep learning and neural networks using TensorFlow. Is it possible
to install TensorFlow on ARM devices, and if so, how? The first big
hurdle seems to be that I can’t install a 64-bit version of Python.
Should I maybe use emulation? Thanks for any help!

submitted by /u/Hot-Ad-3651


TensorBoard: Should I be using the smoothed or normal value for evaluating accuracy?

Hi Everyone,

Pretty much in the title. I’m pretty sure that the smoothed
values are some sort of exponential moving average.

When evaluating the accuracy of the model (say, the accuracy I
want to tell people my model can achieve on the validation set, for
some nth epoch), should I be using the smoothed value or the normal
value? I take the accuracy every epoch.

Of course, this is before the ultimate test on the test set, but
before doing that, to figure out what my max accuracy is and to
gauge whether the hyperparameters I’m choosing are working, should
I be going by the smoothed or the unsmoothed values?

An example:

On step no. 151 (epoch 8)

smoothed accuracy (with smooth = 0.6) is 36.25%

“real” accuracy is 42.86%
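
For context, TensorBoard’s smoothed curve is an exponential moving
average of the raw points (the real implementation also applies a
debiasing correction, which is omitted in this minimal sketch of the
idea):

```python
def ema(values, weight=0.6):
    """Exponential moving average in the style of TensorBoard's smoothing
    slider: each point mixes the previous smoothed value with the raw one."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

# toy accuracy trace; the smoothed curve lags behind the raw one
print(ema([30.0, 50.0, 42.86], weight=0.6))
```

So the smoothed number is a lagging summary of recent values rather
than a measurement taken at any single step.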

Is my actual accuracy 36.25% or 42.86%?

Thanks!

submitted by /u/kirbyburgers


Does my model make sense? It’s looking thicc but I don’t know

I’ve built my first model, and since I’m not very experienced I’m
unsure if it’s structured correctly.

I have the VGG16 model on top (frozen) and I connect this to a
dense layer that I train on categorical data (6 classes).

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 150, 150, 3)]     0
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
flatten (Flatten)            (None, 8192)              0
dense (Dense)                (None, 128)               1048704
dense_1 (Dense)              (None, 6)                 774
=================================================================
Total params: 15,764,166
Trainable params: 1,049,478
Non-trainable params: 14,714,688
_________________________________________________________________
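
As a quick sanity check, the Param # figures for the two added layers
match the standard Dense count (inputs × units + units):

```python
def dense_params(n_in, n_out):
    # a Dense layer stores an n_in x n_out kernel plus one bias per unit
    return n_in * n_out + n_out

print(dense_params(8192, 128))  # dense: flattened VGG16 output -> 128 units
print(dense_params(128, 6))     # dense_1: 128 units -> 6 classes
```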

I want to apply what the model has learnt thus far to a binary
classification problem. So, once trained on my categorical data, I
freeze `dense` and remove `dense_1`, then I add in `dense_2`,
`dense_3`, `dense_4` (the latter having 1 output).

continued from before....

block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
flatten (Flatten)            (None, 8192)              0
dense (Dense)                (None, 128)               1048704
dense_2 (Dense)              (None, 128)               16512
dense_3 (Dense)              (None, 128)               16512
dense_4 (Dense)              (None, 1)                 129
=================================================================
Total params: 15,796,545
Trainable params: 33,153
Non-trainable params: 15,763,392

Then I train it on my binary data (I have set up augmentation,
preprocessing, etc.).

Does this network make sense, though? I don’t have the deep
understanding many people here do, so I’m not really sure. Any
input would be appreciated.

submitted by /u/BananaCharmer


How to get the equation that a multiple linear regression model is using in Keras w/ TensorFlow?

I have the weights and biases for both the normalizer and the
Dense layer in my model, but I am unsure how to convert these
values into the single equation the computer is using to predict
values, which I would like to know. The model takes 2 independent
values and predicts 1 value, so using the weights and biases below,
how could I formulate an equation?

weights for normalizer layer: [ 8.89 11.5 ]

biases for normalizer layer: [321.69 357.53]

(not even sure if the normalizer biases and weights matter, as
they are part of preprocessing)

weights for Dense layer: [[ 0.08] [19.3 ]]

biases for Dense layer: [11.54]
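
For what it’s worth, if the normalizer’s two vectors are a per-feature
mean and variance (which is what a Keras Normalization layer stores;
treat that labeling as an assumption here), the two layers collapse
into one linear equation, y = w·((x − m)/√v) + b, i.e. y = a·x + c
with a = w/√v and c = b − w·(m/√v). A sketch using the numbers above:

```python
import numpy as np

# Values from the post; assuming [8.89, 11.5] is the per-feature mean and
# [321.69, 357.53] the per-feature variance (as a Keras Normalization
# layer would store them) -- this labeling is an assumption.
mean = np.array([8.89, 11.5])
var = np.array([321.69, 357.53])
w = np.array([0.08, 19.3])  # Dense kernel, flattened from [[0.08], [19.3]]
b = 11.54                   # Dense bias

# Dense(Normalize(x)) = w . (x - mean)/sqrt(var) + b collapses to y = a.x + c
a = w / np.sqrt(var)
c = b - np.dot(w, mean / np.sqrt(var))
print(f"y = {a[0]:.5f}*x1 + {a[1]:.5f}*x2 + {c:.3f}")

def two_step(x):
    # the literal layer-by-layer computation, for comparison
    return np.dot(w, (x - mean) / np.sqrt(var)) + b
```

Applying the collapsed equation to any input should give the same
result as running the two layers in sequence.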

Thank you very much and I would greatly appreciate any help!
🙂

submitted by /u/HexadecimalHero


Any pre-trained TensorFlow models on speech/voice data?

Hi All,

I have been looking for TensorFlow models pre-trained on speech
data, preferably in JS/Python, that I can use to extract embeddings
for streaming/recorded audio up to 1 min long.

I intend to use the embeddings as an input to my machine
learning pipeline.

So far, I have found only this:


https://github.com/tensorflow/tfjs-models/tree/master/speech-commands

This is trained to classify 20 voice commands, so I feel the
embeddings from this model may not have sufficient discriminative
power to identify, let’s say, phonemes, or 1,000 words each from
English, French and a few other popular languages.

I am not worried about embedding->word mapping. At the
current stage, I am happy to use the embeddings to evaluate the
similarity score of two different sound samples. E.g. I am not
worried about resolving confusion between ‘red’ and ‘read (past
tense)’. In fact, ‘I read a red book’ and ‘Eye red a read buk’
should result in a 95+% match.

Any hints/redirection are also greatly appreciated. Perhaps
there are simpler ways to achieve the same.

submitted by /u/akshayxyz


Most tutorials seem outdated

I’ve been learning machine learning at uni, but I haven’t done
as much practical stuff as I’d like, so I decided to do some over
the holidays.

Most of the books I’ve looked at (e.g. Deep Learning Pipeline)
are pretty recent (2018ish) but mostly seem to either feature
TensorFlow 1, need a previous version of Keras to be compatible,
etc. Things like the MNIST dataset are also in different forms
across different versions.

For TensorFlow I’ve just been using

tf.compat.v1.function() 

to keep compatibility with TensorFlow 1 so I can follow
along with the examples better, but should I just try to find
something more recent than 2018?

One of the tutorials also wanted me to run all the code on an
Ubuntu Google Cloud machine?

Are there any super good tensorflow books that are up to date
that you’d recommend? I’ve literally just been searching for deep
learning at the university online library.

It seems kinda dumb that the way the framework operates changes
so much in such a short period of time. I’m willing to put time in,
but I don’t want to go through a 500-page book to realize that
everything is now obsolete. Also, how the hell do people working in
the industry deal with this, when half of the code they’ve written
is no longer compatible with the main version?

submitted by /u/eht_amgine_enihcam


AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue

Penn State University pals Brad Bogolea and Mirza Shah were living in Silicon Valley when they pitched Jeff Gee on their robotics concepts. Fortunately for them, the star designer was working at the soon-to-shutter Willow Garage robotics lab. So the three of them — Shah was also a software engineer at Willow — joined together.

The post AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue appeared first on The Official NVIDIA Blog.


How to write a code that can compute and display the loss and accuracy of the trained model on the test set?

I’m rather embarrassed about flooding this forum with mostly
novice questions recently; I’m still a newbie, still struggling to
figure out how the code works in TensorFlow, so pardon me. Is there
any template code I can use to compute and display the loss and
accuracy of the trained model on the test set?
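
In Keras the usual one-liner is model.evaluate(x_test, y_test), which
returns the loss plus whatever metrics were passed to compile(). As an
illustration of what it computes for a softmax classifier, here is a
small NumPy sketch (the toy numbers are made up):

```python
import numpy as np

def evaluate_classifier(probs, labels):
    """Mean categorical cross-entropy and accuracy: the same quantities
    model.evaluate(x_test, y_test) reports for a softmax classifier."""
    eps = 1e-7  # guard against log(0)
    loss = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    acc = np.mean(np.argmax(probs, axis=1) == labels)
    return loss, acc

# toy predicted probabilities for 3 samples over 2 classes
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
loss, acc = evaluate_classifier(probs, labels)
print(f"test loss: {loss:.4f}, test accuracy: {acc:.2%}")
```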

submitted by /u/edmondoh001


(Windows) TensorFlow not detecting the cudart64_110.dll file

Yesterday, I installed the latest CUDA toolkit (11.2), but
TensorFlow said there was no cudart64_110.dll file. So, I then
installed CUDA toolkit 11.0, which has this file, but TensorFlow
still cannot find the file.

I am running Windows 10 Home Edition.
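
A likely culprit (an assumption, since setups differ): TensorFlow
loads cudart64_110.dll from the directories on PATH, so installing
CUDA 11.0 is not enough if its bin folder (typically C:\Program
Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin) was never added to
PATH. A quick sketch to check where, if anywhere, the DLL is visible:

```python
import os

def find_on_path(filename="cudart64_110.dll"):
    """Return the first PATH directory containing the file, else None."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, filename)):
            return d
    return None

print(find_on_path() or "cudart64_110.dll is not on PATH")
```

If this prints the fallback message, adding the CUDA bin directory to
PATH and restarting the shell may be all that is needed.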

submitted by /u/Comprehensive-Ad3963


My AI model doesn’t provide me with ‘accuracy’, it always says it’s 0. Why is that?

My AI model doesn’t provide me with ‘accuracy’, it always says
it’s 0. Why is that?

My code:

(the data I use to train is just a list of a few thousand high
prices of BTC)

import numpy as np
import pandas as pd
from tqdm import tqdm

# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in tqdm(range(len(sequence)), desc='Creating training array'):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence) - 1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)

def create_training_sequence(price_type='high', n_steps=5):
    df = pd.read_csv('candlesticks.csv')
    df = df.drop(['Unnamed: 0', 'open', 'close', 'volume'], axis=1)
    if price_type == 'high':
        sequence = df.drop(['closeTime', 'low'], axis=1)
    if price_type == 'low':
        sequence = df.drop(['closeTime', 'high'], axis=1)
    print(sequence.head())

    X, Y = split_sequence(sequence[price_type], n_steps)
    print(X[1])
    # reshape from [samples, timesteps] into [samples, timesteps, features]
    return X, Y

import tensorflow as TheFuckImDoingHere
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from sklearn.model_selection import train_test_split

use_CPU = False
epochs = 100
n_steps = 60
batch_size = n_steps
n_features = 1

x, y = create_training_sequence(n_steps=n_steps)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], n_features))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], n_features))

print('-------------------------')
print('x_train shape: ' + str(x_train.shape) + ', x_test shape: ' + str(x_test.shape))
print('-------------------------')

if use_CPU == True:
    # limit ram and cpu usage
    TheFuckImDoingHere.config.threading.set_intra_op_parallelism_threads(2)
    TheFuckImDoingHere.config.threading.set_inter_op_parallelism_threads(2)

# define model
model = Sequential()
model.add(LSTM(64, activation='relu', return_sequences=True,
               input_shape=(n_steps, n_features)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(256, activation='relu', return_sequences=True))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(64, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['acc', 'mse'])
model.summary()

callback = TheFuckImDoingHere.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=15, verbose=1,
    mode='min', baseline=None, restore_best_weights=False)

# fit model
if use_CPU == True:
    # Run inference on CPU
    with TheFuckImDoingHere.device('/CPU:0'):
        hist = model.fit(x_train, y_train, epochs=epochs,
                         batch_size=batch_size,
                         validation_data=(x_test, y_test),
                         callbacks=[callback])
elif use_CPU == False:
    # Run inference on GPU
    with TheFuckImDoingHere.device('/GPU:0'):
        hist = model.fit(x_train, y_train, epochs=epochs,
                         batch_size=batch_size,
                         validation_data=(x_test, y_test),
                         callbacks=[callback])

prediction = model.predict(x_test[0].reshape((1, x_test.shape[1], n_features)),
                           verbose=0)
print('Prediction is: ' + str(prediction))
print('Real value is: ' + str(y_test[0]))

# print evaluation
loss1, acc1, mse1 = model.evaluate(x_test, y_test)
print(f"Loss is {loss1:.20E},\nAccuracy is {float(acc1) * 100},\nMSE is {mse1:.8E}")

submitted by /u/Chris-hsr
