How do I make an image classifier with input size (200, 200, 1) perform well? I am only getting 30% accuracy. Is it due to my hardware? I don't have a GPU.
submitted by /u/c0d3r_
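A GPU only affects training speed, not the accuracy a model can reach, so 30% accuracy points at the data or model rather than the hardware. One frequent culprit with image inputs is feeding raw 0-255 pixel values straight into the network; a minimal normalization sketch in NumPy (the array names are hypothetical):

```python
import numpy as np

# Hypothetical batch of grayscale images with shape (batch, 200, 200, 1)
# and raw pixel values in [0, 255].
images = np.random.randint(0, 256, size=(8, 200, 200, 1)).astype("float32")

# Scale to [0, 1] so the first layers receive well-conditioned inputs.
images = images / 255.0
```

Other usual suspects worth checking before blaming hardware: shuffled labels, too high a learning rate, or too few training epochs.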
submitted by /u/AugmentedStartups
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
submitted by /u/Real_Scholar2762
Hello, I'm completely new to TensorFlow. Right now I'm trying out a training script on two different datasets using TensorFlow 1.13.0, and got stuck when it tried to pass an empty directory PRETRAINED_MODEL_PATH to tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH):

PRETRAINED_MODEL_PATH = ''
saver = tf.train.Saver([v for v in tf.get_collection_ref(tf.GraphKeys.GLOBAL_VARIABLES)
                        if ('lr' not in v.name) and ('batch' not in v.name)])
ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
The two datasets get two different responses when the empty directory is passed to tf.train.get_checkpoint_state(). The first dataset I tried outputs a warning, but training continues.
WARNING:tensorflow:FailedPreconditionError: checkpoint; Is a directory
WARNING:tensorflow:checkpoint: Checkpoint ignored
The second dataset I tried outputs an error and the script ends.
Traceback (most recent call last):
  File "cam_est/train_sdf_cam.py", line 827, in <module>
    train()
  File "cam_est/train_sdf_cam.py", line 495, in train
    ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
  File "/home/jg/anaconda3/envs/tf_trimesh/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_management.py", line 278, in get_checkpoint_state
    + checkpoint_dir)
ValueError: Invalid checkpoint state loaded from
I have tried everything I can think of but still can’t figure
out the problem. Can someone help please?
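One way to avoid the empty-path call altogether (a guard sketch, not code from the original script) is to check the configured path before asking TensorFlow for a checkpoint state:

```python
import os

def should_restore(pretrained_model_path):
    """Return True only when a usable checkpoint directory is configured.

    tf.train.get_checkpoint_state behaves inconsistently on an empty or
    missing path, so guard the call instead of passing '' through.
    """
    return bool(pretrained_model_path) and os.path.isdir(pretrained_model_path)
```

With PRETRAINED_MODEL_PATH = '' the guard is False, so the restore branch is skipped and training starts from scratch on both datasets.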
submitted by /u/HistoricalTouch0
Hello everyone, SOS. I am following an online tutorial on how to run gesture recognition. Here is my GitHub for what I am working on, btw: https://github.com/riccrdo5/help. Thank you and happy holidays.
submitted by /u/RicardoCarlos55
def get_model_2(input_shape):
    model = Sequential()
    model.add(Conv2D(64, (5, 5), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Conv2D(128, (4, 4), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # model.add(Conv2D(512, (3, 3), activation='relu'))
    # model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(512, (2, 2), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    # model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    return model
Why do I get the following error when I un-comment that middle
layer?
ValueError: Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d_4/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input shapes: [?,1,1,512].
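The error can be reproduced with arithmetic alone: each 'valid' convolution shrinks the spatial size by (kernel - 1), and each pooling layer divides it (with flooring). A sketch tracing the sizes through get_model_2, assuming a 200x200 input (the actual input shape is not shown in the post):

```python
def trace_spatial_size(size, layers):
    """Trace the spatial dimension through 'valid'-padded conv/pool layers.

    layers: list of ('conv', k) or ('pool', k) tuples for square kernels.
    Returns the size after each layer; a size < 1 means the model is invalid.
    """
    sizes = [size]
    for kind, k in layers:
        if kind == 'conv':      # valid convolution: size - (kernel - 1)
            size = size - (k - 1)
        else:                   # pooling: floor(size / pool_size)
            size = size // k
        sizes.append(size)
    return sizes

# get_model_2 with the middle block un-commented, 200x200 input assumed:
layers = [('conv', 5), ('pool', 3),
          ('conv', 4), ('pool', 2),
          ('conv', 3), ('pool', 2),
          ('conv', 3), ('pool', 2),
          ('conv', 3), ('pool', 2),   # the un-commented middle block
          ('conv', 2), ('pool', 2)]   # final 2x2 conv, then the failing pool
print(trace_spatial_size(200, layers))
# [200, 196, 65, 62, 31, 29, 14, 12, 6, 4, 2, 1, 0]
```

Under that assumption, the un-commented block plus the final 2x2 convolution already reduce the feature map to 1x1x512, so the last MaxPooling2D has nothing left to pool, which matches the reported input shape [?,1,1,512]. Removing one of the later pooling layers or switching them to padding='same' avoids the collapse.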
submitted by /u/BananaCharmer
I wrote a custom model using a custom loss function. The layers are all basic Keras layers, but the loss function is custom. How do I move this to a high-performance serving scenario? I don't need to do training, just prediction. Suggestions? Tutorials?
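For prediction-only serving, the custom loss does not need to travel with the model at all: save the trained model, then reload it with compile=False so Keras never tries to deserialize the loss. A minimal sketch (the loss, layer sizes, and save path are hypothetical), assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Hypothetical custom loss, needed only during training.
def my_custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss=my_custom_loss)
model.save('/tmp/my_model.keras')

# Serving side: load without compiling -- the custom loss is never touched,
# and the model is ready for predict() or export toward TF Serving.
serving_model = tf.keras.models.load_model('/tmp/my_model.keras', compile=False)
preds = serving_model.predict(tf.zeros((2, 4)))
```

The same idea carries over to TF Serving: an exported SavedModel contains only the forward graph, so the custom loss never enters the serving path.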
submitted by /u/i8code
submitted by /u/TheCodingBug
I'm training a VQ-VAE on audio data (spectrograms), but the posterior always collapses. Does anyone have an idea how to avoid that?
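Common levers against collapse in a VQ-VAE are the commitment cost (beta), EMA codebook updates, and resetting dead codes. A NumPy sketch of the two quantization loss terms where beta enters (a simplified illustration, not the poster's model):

```python
import numpy as np

def vq_loss_terms(z_e, codebook):
    """Vector-quantization loss terms for a batch of encoder outputs.

    z_e: (batch, d) encoder outputs; codebook: (K, d) embeddings.
    The total VQ loss is codebook_loss + beta * commitment_loss;
    raising beta forces the encoder to commit to its nearest codes,
    one knob commonly tuned when training collapses.
    """
    # Squared distance from each encoder output to every codebook entry.
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, K)
    z_q = codebook[d2.argmin(1)]                                  # nearest codes
    codebook_loss = ((z_q - z_e) ** 2).mean()    # pulls codes toward encoder (stop-grad on z_e in a real model)
    commitment_loss = ((z_e - z_q) ** 2).mean()  # pulls encoder toward codes (stop-grad on z_q)
    return codebook_loss, commitment_loss

z_e = np.array([[0.1, 0.1], [1.9, 1.9]])
codebook = np.array([[0.0, 0.0], [2.0, 2.0]])
cb, cm = vq_loss_terms(z_e, codebook)
```

The two terms are numerically equal and differ only in where the gradient is stopped in a real model; with EMA codebook updates the codebook_loss term is dropped entirely, which is often more stable on spectrogram data.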
submitted by /u/Ramox_Phersu
Hi all,
I have written a script to export a pre-trained TensorFlow model for inference. The inference code is for the code present at this repository: https://github.com/sabarim/itis.
I used the Deeplab export_model.py script as a reference to write a similar one for this model.
Reference script link:
https://github.com/tensorflow/models/blob/master/research/deeplab/export_model.py
My script:
https://projectcode1.s3-us-west-1.amazonaws.com/export_model.py
I am getting an error when I try to run inference from the saved model.
FailedPreconditionError: 2 root error(s) found.
  (0) Failed precondition: Attempting to use uninitialized value decoder/feature_projection0/BatchNorm/moving_variance
      [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
      [[SemanticPredictions/_13]]
  (1) Failed precondition: Attempting to use uninitialized value decoder/feature_projection0/BatchNorm/moving_variance
      [[{{node decoder/feature_projection0/BatchNorm/moving_variance/read}}]]
0 successful operations. 0 derived errors ignored.
Could anyone please take a look and help me understand the problem?
submitted by /u/DamanpKaur