I am using Keras for boundary/contour detection with a U-Net. When I use binary cross-entropy as the loss, the loss decreases over time as expected and the predicted boundaries look reasonable.

However, I have tried a custom Dice loss with varying learning rates, and none of them work well.

```python
from keras import backend as K

smooth = 1e-6

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
```
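As a sanity check (my own sketch, using NumPy instead of Keras tensors), the same formula gives sensible values on toy arrays, so the math itself seems fine:

```python
import numpy as np

smooth = 1e-6

def dice_coef_np(y_true, y_pred):
    # Same formula as the Keras version, on flattened NumPy arrays
    y_true_f = y_true.reshape(-1)
    y_pred_f = y_pred.reshape(-1)
    intersection = np.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

# Toy example: two foreground pixels in the label, one predicted correctly
y_true = np.array([1., 1., 0., 0.])
y_pred = np.array([1., 0., 0., 0.])
print(dice_coef_np(y_true, y_pred))  # close to 2/3, so loss = 1 - dice is about 1/3
```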

The loss values don't improve; the training log shows something like

loss: nan - dice: .9607 - val_loss: nan - val_dice: .9631

I get NaN for the loss, and the dice and val_dice values barely change as the epochs go by. This happens regardless of the learning rate, anywhere from 0.01 down to 1e-6.
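One thing I can rule in or out myself (my own debugging sketch, not part of the training code): with smooth > 0, the Dice formula cannot produce NaN from finite inputs, so the NaN presumably originates upstream, in the model outputs or labels. A quick check for that:

```python
import numpy as np

def has_bad_values(a):
    # True if the array contains any NaN or Inf entries
    a = np.asarray(a, dtype=float)
    return bool(np.isnan(a).any() or np.isinf(a).any())

print(has_bad_values([0.2, 0.8]))           # expected: False
print(has_bad_values([0.2, float("nan")]))  # expected: True
```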

The train images/labels have shape N x H x W x 1, where N is the number of images and H/W are the height/width of each image.

Can anyone help?

submitted by /u/74throwaway
