I am trying to perform an image segmentation task on 3D images. As preprocessing, I slice each 3D image into 2D slices.
The images contain multiple labels. As loss function I use tf.keras.losses.SparseCategoricalCrossentropy, and as metric I use tf.keras.metrics.SparseCategoricalAccuracy.
When I train the model on two 2D slices, the accuracy reaches 0.99 and the loss becomes very low. However, when I save the model, reload it, and predict on the same two slices, the predictions are completely wrong.
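Roughly, the workflow looks like the sketch below. The model, data shapes, and file name here are just placeholders to show what I mean, not my actual code:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 2 2D slices, 4 possible labels per pixel.
# (My real data comes from slicing 3D volumes; shapes are illustrative only.)
x = np.random.rand(2, 64, 64, 1).astype("float32")
y = np.random.randint(0, 4, size=(2, 64, 64)).astype("int32")

# Placeholder model: my actual network is a deeper 2D segmentation model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(4, 1, activation="softmax"),  # per-pixel class probabilities
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

model.fit(x, y, epochs=50, verbose=0)  # accuracy climbs to ~0.99 on these slices

model.save("seg_model.h5")
reloaded = tf.keras.models.load_model("seg_model.h5")

# Predicting on the exact same slices after reloading:
preds = reloaded.predict(x)          # shape (2, 64, 64, 4)
labels = np.argmax(preds, axis=-1)   # per-pixel predicted labels look wrong here
```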
Does anybody have an idea what I could be missing?
submitted by /u/Successful-Ad-8021