
Why am I getting low accuracy and a large loss?

I chose the TensorFlow Estimator for this implementation because it supports a distributed training API. Honestly, I found code that was quite understandable, so I chose it to implement sensor-based signal recognition on multiple GPUs.

That code was originally for training on the MNIST dataset on multiple GPUs. When I executed it, I got an error because the MNIST dataset download API no longer works. Here is the link to that code: https://github.com/shu-yusa/tensorflow-mirrored-strategy-sample/blob/master/cnn_mnist.py
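(As a side note, I think the broken MNIST download in that sample could probably be replaced by loading the data through tf.keras.datasets, roughly like the sketch below, but I have not verified this against the original code.)

import numpy as np
import tensorflow as tf

# Possible stand-in for the sample's broken MNIST loader (unverified sketch).
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28 * 28).astype(np.float32) / 255.0  # flatten to [N, 784]
test_images = test_images.reshape(-1, 28 * 28).astype(np.float32) / 255.0
train_labels = train_labels.astype(np.int32)  # integer labels for sparse_softmax_cross_entropy
test_labels = test_labels.astype(np.int32)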

I could not find any solution on Google. There might be multiple issues behind it; the code being written for TensorFlow 1 may be one of them. I tried to convert the code to TensorFlow 2. Most of it converted, but the tf.contrib-related parts could not be restored. So I decided to adapt the code for a sensor-based signal (time series).
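(As far as I can tell, the main tf.contrib piece in the linked sample is the distribution strategy; in TensorFlow 2 the equivalent seems to live under tf.distribute and is passed to the Estimator through a RunConfig, roughly like this:)

import tensorflow as tf

# Rough TF2 replacement for the sample's tf.contrib-based MirroredStrategy usage.
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)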

However, when I ran the code, the accuracy was about 30% and the loss value was large. On the other hand, when I implemented a CNN on the same dataset with the low-level TensorFlow API, I got 95% accuracy. Now I do not know why it gives low accuracy with the tf.estimator version. In my opinion, one possible reason is that the input is fed to the CNN in the wrong shape or order. Here is the code:

import tensorflow as tf

# segment_size and num_input_channels are defined elsewhere in the script;
# segment_size * num_input_channels matches the 600 values per example.

def cnn_model_fn(features, labels, mode):
    """Model function for CNN."""
    # Input Layer
    # Reshape X to a 4-D tensor: [batch_size, height, width, channels].
    # Height is 1, width is segment_size, and there are three channels
    # (accelerometer x, y, z).
    input_layer = tf.reshape(features["x"], [-1, 1, segment_size, num_input_channels])

    # Convolutional Layer #1
    # Computes 32 features using a 1x12 filter with ReLU activation.
    # Padding is added to preserve width and height.
    # Input Tensor Shape: [batch_size, 1, segment_size, num_input_channels]
    # Output Tensor Shape: [batch_size, 1, segment_size, 32]
    conv1 = tf.compat.v1.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[1, 12],
        padding="same",
        activation=tf.nn.relu)

    # Pooling Layer #1
    # First max pooling layer with a 1x4 window and stride of 2
    # Output Tensor Shape: [batch_size, 1, segment_size / 2, 32]
    pool1 = tf.compat.v1.layers.max_pooling2d(
        inputs=conv1, pool_size=[1, 4], strides=2, padding='same')

    # Convolutional Layer #2
    # Computes 64 features using a 1x12 filter.
    # Padding is added to preserve width and height.
    # Output Tensor Shape: [batch_size, 1, segment_size / 2, 64]
    conv2 = tf.compat.v1.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[1, 12],
        padding="same",
        activation=tf.nn.relu)

    # Pooling Layer #2
    # Second max pooling layer with a 1x4 window and stride of 2
    # Output Tensor Shape: [batch_size, 1, segment_size / 4, 64]
    pool2 = tf.compat.v1.layers.max_pooling2d(
        inputs=conv2, pool_size=[1, 4], strides=2, padding='same')

    # Flatten tensor into a batch of vectors
    # Output Tensor Shape: [batch_size, 1 * 50 * 64] (segment_size / 4 == 50)
    pool2_flat = tf.reshape(pool2, [-1, 1 * 50 * 64])

    # Dense Layer
    # Densely connected layer with 1024 neurons
    # Output Tensor Shape: [batch_size, 1024]
    dense = tf.compat.v1.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)

    # Add dropout operation; 0.6 probability that an element will be kept
    dropout = tf.compat.v1.layers.dropout(
        inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)

    # Logits layer
    # Input Tensor Shape: [batch_size, 1024]
    # Output Tensor Shape: [batch_size, 6]
    logits = tf.compat.v1.layers.dense(
        inputs=dropout,
        units=6)  # 10 in the original MNIST code; we have 6 classes, so 6 units in the last layer

    predictions = {
        # Generate predictions (for PREDICT and EVAL mode)
        "classes": tf.argmax(input=logits, axis=1),
        # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
        # `logging_hook`.
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    # labels = tf.argmax(tf.cast(labels, dtype=tf.int32), 1)

    # Calculate Loss (for both TRAIN and EVAL modes)
    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    # To monitor training accuracy, these two lines can be enabled:
    # accuracy = tf.compat.v1.metrics.accuracy(labels=labels, predictions=predictions['classes'], name='acc_op')
    # tf.compat.v1.summary.scalar('accuracy', accuracy[1])

    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss,
            global_step=tf.compat.v1.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    # Add evaluation metrics (for EVAL mode)
    eval_metric_ops = {
        "accuracy": tf.compat.v1.metrics.accuracy(labels=labels,
                                                  predictions=predictions["classes"])}
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
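For completeness, this is roughly how the model_fn is wired up and trained; the names train_x, train_y, test_x, test_y, and batch_size are placeholders for my actual arrays and settings, not the exact code:

# Sketch of the training/evaluation setup. train_x has shape [N, 600]
# (1 x segment_size x 3 channels flattened per example) and train_y holds
# integer class labels in the range 0..5. `config` is the MirroredStrategy
# RunConfig shown earlier (or can be omitted for single-GPU runs).
classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, config=config)

train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x={"x": train_x},
    y=train_y,
    batch_size=batch_size,
    num_epochs=None,
    shuffle=True)
classifier.train(input_fn=train_input_fn, steps=1000)

eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x={"x": test_x}, y=test_y, num_epochs=1, shuffle=False)
print(classifier.evaluate(input_fn=eval_input_fn))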

After debugging:

inside `def cnn_model_fn(features, labels, mode):`, `features` is {'x': <tf.Tensor 'IteratorGetNext:0' shape=(?, 600) dtype=float64>}, `labels` is Tensor("IteratorGetNext:1", shape=(?,), dtype=int64), and `mode` is the string 'train'.
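Since the features arrive flattened to shape (?, 600), I want to make sure the reshape in the model_fn matches how the 600 values are laid out. A small sanity check like the one below (assuming segment_size = 200 and 3 channels, so 1 * 200 * 3 = 600) shows the two layouts I am worried about:

import numpy as np

# The model_fn's reshape to [-1, 1, 200, 3] treats consecutive triples as one
# (x, y, z) reading per time step. If a row is actually stored as
# [200 x-values, 200 y-values, 200 z-values], that reshape mixes the channels,
# and the row would need reshape(3, 200).T (or an equivalent transpose) instead.
row = np.arange(600)

interleaved_view = row.reshape(1, 200, 3)   # what the model_fn's reshape assumes
block_view = row.reshape(3, 200).T          # per-channel blocks, shape [200, 3]

print(interleaved_view[0, 0])  # first time step if interleaved: [0 1 2]
print(block_view[0])           # first time step if stored in blocks: [0 200 400]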

**Here is the result on the test data:**

Saving 'checkpoint_path' summary for global step 1000: /tmp/tmp77ffy2i9/model.ckpt-1000

{'accuracy': 0.3959022, 'loss': 1.698279, 'global_step': 1000}

Can anyone help me figure out why my model is giving low accuracy and such a large loss value?

submitted by /u/Nafees060
