
Why does my model perform really badly even though val_accuracy is 0.9800?

Hello, I'm trying to build an image classifier that labels a given tomato plant leaf as one of ['Tomato___Early_blight', 'Tomato___Septoria_leaf_spot', 'Tomato___healthy']. I took the dataset from here. It is already augmented; from it I kept only the tomato plant leaf images and further reduced them to the three classes mentioned above.

Here is my code

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import load_img, img_to_array
import numpy as np
import matplotlib.pyplot as plt

# Generators rescale pixel values to [0, 1]
train_gen = ImageDataGenerator(rescale=1./255)
test_gen = ImageDataGenerator(rescale=1./255)

train_data = train_gen.flow_from_directory(
    directory='/Users/saibalaji/Documents/TensorFlowProjects/TomatoDataSet/train',
    target_size=(256, 256))

validation_data = test_gen.flow_from_directory(
    directory='/Users/saibalaji/Documents/TensorFlowProjects/TomatoDataSet/valid',
    target_size=(256, 256))

model = tf.keras.models.Sequential([
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

model.compile(optimizer=tf.optimizers.RMSprop(learning_rate=0.001),
              loss=tf.losses.categorical_crossentropy,
              metrics=['accuracy'])

model.fit(train_data, epochs=12, validation_data=validation_data)
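Note that class_labels, used in the prediction code below, is not defined anywhere in the snippet. One way to obtain it (my assumption, not part of the original post) is to read the class-to-index mapping that flow_from_directory builds from the subfolder names:

# Hypothetical helper (not in the original post): recover the label list
# in index order from the generator's class_indices mapping.
class_labels = sorted(train_data.class_indices, key=train_data.class_indices.get)
print(class_labels)  # e.g. ['Tomato___Early_blight', 'Tomato___Septoria_leaf_spot', 'Tomato___healthy']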

This is my prediction code

# load the image
my_image = load_img('/Users/saibalaji/Documents/TensorFlowProjects/TomatoDataSet/train/Tomato___Septoria_leaf_spot/ffd3c6f3-17d3-45f1-a599-2623e111ec71___Matt.S_CG 6493.JPG',
                    target_size=(256, 256))
plt.imshow(my_image)

# preprocess the image
my_image = img_to_array(my_image)
expand_image = np.expand_dims(my_image, axis=0)
print(expand_image.shape)

# make the prediction
prediction = model.predict(expand_image)
plt.xlabel(class_labels[np.argmax(prediction)])
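For anyone comparing the two code paths: one thing worth checking (my assumption, not something confirmed in the post) is whether the array passed to predict goes through the same preprocessing that the training generator applied. A minimal sketch of an inference step that mirrors the generator's rescale=1./255 might look like this; the file path is hypothetical:

# Sketch only, assuming inference should mirror the generator's rescale=1./255.
img = load_img('/path/to/leaf.JPG', target_size=(256, 256))  # hypothetical path
arr = img_to_array(img) / 255.0        # same scaling the ImageDataGenerator applied
arr = np.expand_dims(arr, axis=0)      # add the batch dimension
probs = model.predict(arr)
print(class_labels[np.argmax(probs)])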

https://preview.redd.it/364hamfitnp71.png?width=770&format=png&auto=webp&s=f27ae6c0fb5a703141a526d5ce755f5f74f13afd

As you can see, my model's validation accuracy is good, but its classifications are really bad, even for images from the training dataset. How can I solve this problem?

https://preview.redd.it/ebgy8k70tnp71.png?width=1732&format=png&auto=webp&s=7a45c1b3b34692f834c1658a78f992d76e78e123

submitted by /u/kudoshinichi-8211
