Qualcomm has released a lot of drivers, and I have signed up to work
with them. I need guidance on how to speed up performance, and on how
to set up the drivers, as in the case of CUDA. If I just use my
TensorFlow Lite model directly, would that give me the best
performance?
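For reference, a TFLite model runs on the CPU by default; on Qualcomm chips, acceleration usually goes through a delegate attached to the interpreter. A minimal sketch, assuming the Hexagon delegate library is present on the device (the .so name and model path here are illustrative):

import tensorflow as tf

# Attach a delegate so inference is offloaded from the CPU; without one,
# the plain .tflite model runs CPU-only.
delegate = tf.lite.experimental.load_delegate('libhexagon_delegate.so')
interpreter = tf.lite.Interpreter(
    model_path='model.tflite',
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()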
submitted by /u/chhab798
It’s standard practice to finetune an object detection model for
a given task; finetuning is part of the TensorFlow Object Detection
API tutorial workflow.
However, I have been tasked by a sceptical supervisor to show
that using a pretrained model actually improves performance. So I
need a way to reinitialise the parameters of
one of the pretrained TF Object Detection models, so I can
train from scratch and convince the supervisor that finetuning is
indeed best practice.
However, I haven’t found a way to do this – finetuning seems to
be baked in. Is there a way I can reinitialise the weights of the
network while still following the TensorFlow Object Detection
workflow tutorial?
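One possible sketch (not the API's documented path for this, so treat it as an assumption): build the architecture from the pipeline.config that ships with the pretrained model, but never restore the checkpoint, so the variables keep their random initial values.

from object_detection.utils import config_util
from object_detection.builders import model_builder

# Build the detection architecture from the bundled config, skipping the
# checkpoint restore; all variables stay at their random initial values.
configs = config_util.get_configs_from_pipeline_file('pipeline.config')
model = model_builder.build(model_config=configs['model'], is_training=True)

Alternatively, removing the fine_tune_checkpoint line from pipeline.config should make the standard training script start from randomly initialised weights.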
submitted by /u/pram-ila
LSTM Tensorflow Input/Output Dimensions
I’m a little confused by what I’m getting vs. what I’m
expecting. I’m using TensorFlow 2.1 with Python 3.7 in Anaconda
3-2020.07.
Here’s my problem:
- I want my output to be the next value in an hour-by-hour time series.
- My input has 99 features.
- I have 24,444 data points for training. Some of the data was corrupted/reserved for validation.
I’m trying to build a two-layer deep neural network using LSTM
layers:
import tensorflow
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(tensorflow.keras.layers.LSTM(64, return_sequences=True, input_dim=99))
model.add(tensorflow.keras.layers.LSTM(32, return_sequences=True))
model.add(tensorflow.keras.layers.Dense(1))
I plan to give it windows of 72 hours (3 days) of sequential
training data.
So when I give my model training data:

model.fit(X_data, Y_data, …)
I planned on giving X_data with dimensions of size [24444, 72,
99], where the first dimension 24444 describes the data points, the
72 describes the 72 hours of history, and the 99 describes my
training features.
My Y_data has dimensions of size [24444, 72, 1], where the first
dimension 24444 describes my training points, 72 describes the
history, and 1 is my output feature.
My question is, when training is done, and I’m actively using my
model for predictions, what should my production input size be?
prediction = model.predict(production_data)
Should my production input size be [1, 72, 99], where 1 is the
number of output points I expect, 72 is my history, and 99 is my
feature size?
When I do this, I get an output size of [72, 1]. That feels…
weird?
What is the difference between feeding my model input of [72, 1,
99] vs. [1, 72, 99]? Does the first case not propagate the internal
state forward?
If I give my model [1, 1, 99], do I need to loop my model
predictions? And how would I do this?
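For what it's worth, a small shape experiment makes the batch-versus-timestep distinction concrete; a sketch with dummy zero data (shapes match the setup above):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(72, 99)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.Dense(1),
])

# One window of 72 consecutive hours: axis 0 is the batch, axis 1 is time.
one_window = np.zeros((1, 72, 99), dtype='float32')
print(model.predict(one_window).shape)  # (1, 72, 1): one value per timestep

# 72 independent length-1 sequences: state is NOT carried across batch items.
many_stubs = np.zeros((72, 1, 99), dtype='float32')
print(model.predict(many_stubs).shape)  # (72, 1, 1)

Unless stateful=True is set (and batches are fed in order), each batch item is treated as an independent sequence, so [72, 1, 99] does not propagate internal state forward the way a single [1, 72, 99] window does.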
submitted by /u/jyliu86
So I found a pre-trained model that greatly interested me:
https://github.com/OMR-Research/tf-end-to-end
Probably irrelevant, but I found it through this article, which
you might like to read too:
https://heartbeat.fritz.ai/play-sheet-music-with-python-opencv-and-an-optical-music-recognition-model-a55a3bea8fe
I wanted to play around with it on an Android phone, and I found
out that TensorFlow can support this easily if I can just convert
the model into TFLite format. The problem is that I’ve had a lot of
trouble converting it, likely due to my lack of experience with
such a complex model.
This is the model I’ve been tooling with: https://grfia.dlsi.ua.es/primus/models/PrIMuS/Semantic-Model.zip
So in order to get it into TFLite format, I needed to turn its
.meta / .index / .data files into a frozen graph. However, to do
this you need to know the input and output nodes, which I had
trouble identifying, even with TensorBoard summaries. Another
method I found was through the SavedModel format; however, I was
getting all sorts of errors, detailed in my Stack Overflow post,
which you can maybe help with:
https://stackoverflow.com/questions/65572476/how-do-i-convert-a-meta-index-and-data-file-into-savedmodel-pb-format-with
So basically, I just want to convert my checkpoint files into
something usable for inference, and I’m getting really lost and
need some advice.
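In case it helps, one way to hunt for input/output nodes is to restore the meta graph and list placeholder ops before attempting a freeze. A sketch for a TF 1.x checkpoint; the 'semantic_model' path prefix is hypothetical:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Rebuild the graph from the .meta file, then restore the variables from the
# matching .index/.data files (restore() takes their shared path prefix).
saver = tf.train.import_meta_graph('semantic_model.meta')
with tf.Session() as sess:
    saver.restore(sess, 'semantic_model')
    graph = tf.get_default_graph()
    # Placeholders are the usual input candidates; print them for inspection.
    for op in graph.get_operations():
        if op.type == 'Placeholder':
            print(op.name, op.outputs[0].shape)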
submitted by /u/Vendredi46
How do I make an image classifier with input size (200, 200, 1)
perform well? I am only getting 30% accuracy. Is it due to my
hardware? I don’t have a GPU.
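Hard to say without more detail, but low accuracy usually points to the model or data rather than hardware; a CPU only makes training slower, not less accurate. As a baseline to compare against, a minimal sketch assuming grayscale inputs scaled to [0, 1] (num_classes is a placeholder):

import tensorflow as tf

num_classes = 10  # placeholder; set to the real number of classes
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(200, 200, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])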
submitted by /u/c0d3r_
Hello, I’m completely new to TensorFlow. Right now I’m trying out
a training script on two different datasets using TensorFlow
1.13.0, and got stuck when the script passes an empty directory
PRETRAINED_MODEL_PATH to
tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH):

PRETRAINED_MODEL_PATH = ''
saver = tf.train.Saver([v for v in tf.get_collection_ref(tf.GraphKeys.GLOBAL_VARIABLES)
                        if ('lr' not in v.name) and ('batch' not in v.name)])
ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
The two datasets get two different responses when the empty
directory is passed to tf.train.get_checkpoint_state(). The
first dataset I tried outputs a warning, but training
continues:
WARNING:tensorflow:FailedPreconditionError: checkpoint; Is a directory
WARNING:tensorflow:checkpoint: Checkpoint ignored
The second dataset I tried outputs an error, and the script ends:
Traceback (most recent call last):
  File "cam_est/train_sdf_cam.py", line 827, in <module>
    train()
  File "cam_est/train_sdf_cam.py", line 495, in train
    ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
  File "/home/jg/anaconda3/envs/tf_trimesh/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_management.py", line 278, in get_checkpoint_state
    + checkpoint_dir)
ValueError: Invalid checkpoint state loaded from
I have tried everything I can think of but still can’t figure
out the problem. Can someone help please?
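One workaround sketch: guard the lookup so an empty path is never handed to tf.train.get_checkpoint_state() in the first place.

# Skip the checkpoint lookup entirely when no pretrained model is given.
if PRETRAINED_MODEL_PATH:
    ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
else:
    ckptstate = None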
submitted by /u/HistoricalTouch0
import matplotlib.pyplot as plt

# Plot the first 25 training images in a 5x5 grid with their class labels.
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
submitted by /u/Real_Scholar2762