Categories
Misc

Trained a model with Keras in Python using a custom loss function. How can I deploy it for inference with TensorFlow Serving — i.e. how do I define the custom loss function at serving time, or just disable that part?

I wrote a custom model using a custom loss function. The layers
are all basic Keras layers, but the loss function is custom. How
do I move this to a high-performance serving scenario? I don't need
to do training, just prediction. Suggestions? Tutorials?
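Concretely, the situation looks like the sketch below (a minimal stand-in, not my actual model or file names): the loss only matters at training time, so the model can be reloaded with compile=False and exported for TensorFlow Serving without the loss ever being deserialized.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the custom loss used during training.
def my_custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Stand-in model (basic Keras layers, as in the actual network).
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=my_custom_loss)
model.save("trained_model.h5")

# For inference the loss is never evaluated, so skip compiling on load;
# this sidesteps deserializing the custom loss entirely.
serving_model = tf.keras.models.load_model("trained_model.h5", compile=False)
pred = serving_model.predict(np.zeros((2, 4)))

# For TensorFlow Serving, export a SavedModel from here, e.g.
#   serving_model.export("export/my_model/1")   # Keras 3 / recent TF
# or serving_model.save("export/my_model/1") on older tf.keras versions.
```

Alternatively, load_model accepts custom_objects={"my_custom_loss": my_custom_loss} if the compiled state is actually needed.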

submitted by /u/i8code

[visit reddit]

[comments]


ValueError: Negative dimension size caused by subtracting 2 from 1

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def get_model_2(input_shape):
    model = Sequential()
    model.add(Conv2D(64, (5, 5), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Conv2D(128, (4, 4), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # model.add(Conv2D(512, (3, 3), activation='relu'))
    # model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (2, 2), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    # model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    return model

Why do I get the following error when I uncomment that middle
block?

ValueError: Negative dimension size caused by subtracting 2 from 1
for '{{node max_pooling2d_4/MaxPool}} = MaxPool[T=DT_FLOAT,
data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1],
padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input
shapes: [?,1,1,512].
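For reference, with 'valid' padding each conv/pool layer shrinks the spatial size, and the error says the feature map is already 1x1 when the extra pool runs. A small sketch traces the sizes, assuming a 128x128 input (the actual input_shape isn't shown in the post):

```python
def out_dim(size, kernel, stride):
    # 'valid' padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

size = 128  # assumed input height/width; the post does not give input_shape
# (kernel, stride) for each conv/pool in get_model_2, middle block included
layer_specs = [(5, 1), (3, 3), (4, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2),
               (3, 1), (2, 2),  # the uncommented middle conv + pool
               (2, 1), (2, 2)]
sizes = []
for kernel, stride in layer_specs:
    size = out_dim(size, kernel, stride)
    sizes.append(size)
```

With these numbers the map shrinks 124 → 41 → 38 → 19 → 17 → 8 → 6 → 3 → 1 by the middle conv, so the middle pool has nothing left to pool: a 2x2 window on a 1x1 map is the negative-dimension error. Dropping one pooling stage, shrinking the input less aggressively, or using padding='same' avoids it.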

submitted by /u/BananaCharmer

[visit reddit]

[comments]


How to fix VQ-VAE posterior collapse?

I'm training a VQ-VAE on audio data (spectrograms), but the
posterior always collapses. Does anyone have an idea how to avoid that?
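For context, the commitment weight β in the VQ-VAE objective is the usual knob here (along with EMA codebook updates). A minimal NumPy sketch of the quantization step and the two loss terms — the shapes and β value are illustrative, not from the post:

```python
import numpy as np

def vector_quantize(z_e, codebook, beta=0.25):
    # z_e: (N, D) encoder outputs; codebook: (K, D) embedding vectors.
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)   # nearest codebook entry per vector
    z_q = codebook[idx]
    # Codebook loss pulls embeddings toward encoder outputs (with a
    # stop-gradient on z_e in the real model); the commitment term,
    # weighted by beta, keeps the encoder near the codebook
    # (stop-gradient on z_q). Raising beta is a common anti-collapse fix.
    codebook_loss = ((z_q - z_e) ** 2).mean()
    commitment_loss = beta * ((z_e - z_q) ** 2).mean()
    return z_q, idx, codebook_loss + commitment_loss

rng = np.random.default_rng(0)
z_q, idx, loss = vector_quantize(rng.normal(size=(8, 4)),
                                 rng.normal(size=(16, 4)))
```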

submitted by /u/Ramox_Phersu

[visit reddit]

[comments]


YOLOv4 Face Recognition on Custom Dataset


submitted by /u/TheCodingBug

[visit reddit]

[comments]

Tensorflow on ARM Devices

Hey everyone,

I'm using a Surface Pro X and wanted to get a little into deep
learning and neural networks using TensorFlow. Is it possible to
install TensorFlow on ARM devices, and if so, how? The first big
hurdle seems to be that I can't install Python as a 64-bit version.
Should I maybe use emulation? Thanks for any help!

submitted by /u/Hot-Ad-3651

[visit reddit]

[comments]


Export a model for inference.

Hi, All,

I have written a script to export a pre-trained TensorFlow model
for inference. The inference code is for the model in this
repository: https://github.com/sabarim/itis.

I took the Deeplab export_model.py script as a reference to
write a similar one for this model.

Reference script link:
https://github.com/tensorflow/models/blob/master/research/deeplab/export_model.py

My script:

https://projectcode1.s3-us-west-1.amazonaws.com/export_model.py

I get an error when I try to run inference from the saved model.

FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Attempting to use uninitialized value
    decoder/feature_projection0/BatchNorm/moving_variance
    [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
    [[SemanticPredictions/_13]]
(1) Failed precondition: Attempting to use uninitialized value
    decoder/feature_projection0/BatchNorm/moving_variance
    [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
0 successful operations. 0 derived errors ignored.

Could anyone please take a look and help me understand the
problem?
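A minimal reproduction of this error class (a toy variable, not the actual model): reading a variable that was never initialized, or never restored from the training checkpoint, raises FailedPreconditionError — which is what happens here for the BatchNorm moving statistics.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Toy variable standing in for .../BatchNorm/moving_variance.
v = tf.get_variable("moving_variance", shape=[2],
                    initializer=tf.ones_initializer())

caught = False
with tf.Session() as sess:
    try:
        sess.run(v)  # not yet initialized -> FailedPreconditionError
    except tf.errors.FailedPreconditionError:
        caught = True
    # The usual fix in an export script: run the initializer and/or
    # saver.restore(sess, checkpoint_path) in the same session that
    # builds the SavedModel, before any inference op runs.
    sess.run(tf.global_variables_initializer())
    values = sess.run(v)
```

So the thing to check in the export script is whether the session that writes the SavedModel actually restores the checkpoint variables first.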

submitted by /u/DamanpKaur

[visit reddit]

[comments]


Tensorboard: Should I be using the smoothed or normal value for evaluating accuracy?

Hi Everyone,

Pretty much in the title. I’m pretty sure that the smoothed
values are some sort of exponential moving average.

When evaluating the accuracy of the model (say, the accuracy I
want to tell people my model can achieve on the validation set, for
some nth epoch), should I be using the smoothed value or the normal
value? I take the accuracy every epoch.

Of course, this is before the ultimate test on the test set, but
to get a sense of my maximum accuracy and to gauge whether the
hyperparameters I'm choosing are working, should I go by the
smoothed or unsmoothed values?

An example:

On step no. 151 (epoch 8)

smoothed accuracy (with smooth = 0.6) is 36.25%

“real” accuracy is 42.86%

Is my actual accuracy 36.25% or 42.86%?
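For reference, the smoothing TensorBoard applies is (at least approximately — recent versions also debias the start) an exponential moving average over the logged scalars, so the "real" value is the measured accuracy and the smoothed one mixes in history. A sketch of the basic recurrence:

```python
def tensorboard_smooth(values, weight=0.6):
    # smoothed[i] = weight * smoothed[i-1] + (1 - weight) * values[i],
    # seeded with the first value. weight is the slider position.
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed
```

A constant series is unchanged by smoothing, while a rising series lags behind its latest value, which is why the smoothed 36.25% sits below the measured 42.86%.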

Thanks!

A

submitted by /u/kirbyburgers

[visit reddit]

[comments]


How to get the equation that a multiple linear regression model is using in Keras w/ Tensorflow?

I have the weights and biases for both the normalizer and the
Dense layer in my model, but I am unsure how to combine these
values into the single equation the model uses to predict values,
which I would like to know. The model takes 2 independent variables
and predicts 1 value, so using the weights and biases below, how
could I formulate an equation?

weights for normalizer layer: [ 8.89 11.5 ]

biases for normalizer layer: [321.69 357.53]

(I'm not even sure the normalizer's weights and biases matter, as
it is part of preprocessing.)

weights for Dense layer: [[ 0.08] [19.3 ]]

biases for Dense layer: [11.54]
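The fold in question looks like the following sketch. It assumes the normalizer's "weights" are the per-feature means and its "biases" the per-feature variances — Keras's Normalization layer stores exactly a mean and a variance per input, though the labels above are the poster's — so the two layers collapse into one linear equation in the raw inputs:

```python
import numpy as np

mean = np.array([8.89, 11.5])     # normalizer "weights" (assumed means)
var = np.array([321.69, 357.53])  # normalizer "biases" (assumed variances)
W = np.array([[0.08], [19.3]])    # Dense layer weights
b = np.array([11.54])             # Dense layer bias

# Two-step model: y = ((x - mean) / sqrt(var)) @ W + b
# Folded into one equation: y = slope @ x + intercept, where
#   slope_i     = W_i / sqrt(var_i)
#   intercept   = b - sum_i W_i * mean_i / sqrt(var_i)
slope = W[:, 0] / np.sqrt(var)
intercept = b[0] - (W[:, 0] * mean / np.sqrt(var)).sum()

def predict_folded(x):
    return slope @ x + intercept

def predict_two_step(x):
    return ((x - mean) / np.sqrt(var)) @ W[:, 0] + b[0]
```

So the normalizer's statistics do matter: they rescale the Dense weights and shift the bias in the final equation.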

Thank you very much and I would greatly appreciate any help!
🙂

submitted by /u/HexadecimalHero

[visit reddit]

[comments]


Does my model make sense? It’s looking thicc but I don’t know

I've built my first model and I'm not very experienced, so I'm
unsure if it's structured correctly.

I have the VGG16 model on top (frozen) and I connect this to a
dense layer that I train on categorical data (6 classes).

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 150, 150, 3)]     0
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
flatten (Flatten)            (None, 8192)              0
dense (Dense)                (None, 128)               1048704
dense_1 (Dense)              (None, 6)                 774
=================================================================
Total params: 15,764,166
Trainable params: 1,049,478
Non-trainable params: 14,714,688
_________________________________________________________________

I want to apply what the model has learnt thus far to a binary
classification problem. So, once trained on my categorical data, I
freeze `dense` and remove `dense_1`, then I add in `dense_2`,
`dense_3`, `dense_4` (the latter having 1 output).

continued from before....
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
flatten (Flatten)            (None, 8192)              0
dense (Dense)                (None, 128)               1048704
dense_2 (Dense)              (None, 128)               16512
dense_3 (Dense)              (None, 128)               16512
dense_4 (Dense)              (None, 1)                 129
=================================================================
Total params: 15,796,545
Trainable params: 33,153
Non-trainable params: 15,763,392
_________________________________________________________________

Then I train it on my binary data (I have set up augmentation,
preprocessing, etc.).
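In code, the two stages described look roughly like this (a reconstruction from the summaries above, not the poster's script; weights=None is used here only to avoid downloading the ImageNet weights in a sketch):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stage 1: frozen VGG16 base plus a trainable 6-class head.
base = tf.keras.applications.VGG16(
    weights=None, include_top=False, input_shape=(150, 150, 3))
base.trainable = False

x = layers.Flatten(name="flatten")(base.output)
x = layers.Dense(128, activation="relu", name="dense")(x)
out = layers.Dense(6, activation="softmax", name="dense_1")(x)
model = models.Model(base.input, out)
# ... train on the 6-class categorical data ...

# Stage 2: freeze the learnt dense layer, drop dense_1, add a binary head.
model.get_layer("dense").trainable = False
x = model.get_layer("dense").output
x = layers.Dense(128, activation="relu", name="dense_2")(x)
x = layers.Dense(128, activation="relu", name="dense_3")(x)
out2 = layers.Dense(1, activation="sigmoid", name="dense_4")(x)
binary_model = models.Model(model.input, out2)
```

The parameter totals of this reconstruction match the posted summaries (15,764,166 and 15,796,545), so the wiring is at least internally consistent.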

Does this network make sense, though? I don't have the deep
understanding many people here do, so I'm not really sure. Any
input would be appreciated.

submitted by /u/BananaCharmer

[visit reddit]

[comments]


Most tutorials seem outdated

I've been learning machine learning at uni, but I haven't done
as much practical work as I'd like, so I decided to do some over
the holidays.

Most of the books I've looked at (e.g. Deep Learning Pipeline)
are fairly recent (2018-ish) but mostly either feature
TensorFlow 1, need a previous version of Keras to be compatible,
etc. Things like the MNIST dataset also come in different forms
across different versions.

For tensorflow I’ve been just using

tf.compat.v1.function() 

to keep compatibility with TensorFlow 1 so I can follow the
examples more easily, but should I just try to find something more
recent than 2018?
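For reference, this is the kind of TF1-compatibility shim I mean — a minimal sketch of the usual module-level compat import, under which TF1-style graph-and-session code from older books runs unchanged on TF2:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# TF1-style code from an older tutorial: build a graph, then run it
# in a session with a feed_dict.
x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.reduce_sum(x, axis=1)
with tf.Session() as sess:
    out = sess.run(y, feed_dict={x: [[1.0, 2.0]]})
```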

One of the tutorials also wanted me to run all the code on an
Ubuntu Google Cloud machine.

Are there any really good, up-to-date TensorFlow books that
you'd recommend? I've literally just been searching for "deep
learning" in the university's online library.

It seems kind of absurd that the way the framework operates
changes so much in such a short period of time. I'm willing to put
the time in, but I don't want to go through a 500-page book only to
realize that everything is now obsolete. Also, how the hell do
people working in industry deal with this, when half of the code
they've written is no longer compatible with the main version?

submitted by /u/eht_amgine_enihcam

[visit reddit]

[comments]