
DICE coefficient loss function #99

@mongoose54

Description

@FabianIsensee
I am trying to modify the categorical_crossentropy loss function to a dice_coefficient loss function in the Lasagne Unet example. I found this implementation in Keras and modified it for Theano as follows:

import theano.tensor as T

def dice_coef(y_pred, y_true):
    smooth = 1.0  # smoothing term so the ratio is defined for empty masks
    y_true_f = T.flatten(y_true)
    y_pred_f = T.flatten(T.argmax(y_pred, axis=1))  # hard (argmax) prediction per pixel
    intersection = T.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (T.sum(y_true_f) + T.sum(y_pred_f) + smooth)

def dice_coef_loss(y_pred, y_true):
    return dice_coef(y_pred, y_true)
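
For reference, the expression can be compiled and sanity-checked on toy inputs like this (the shapes are an assumption: y_pred as an (N, n_classes) softmax output and y_true as an (N,) binary mask):

import numpy as np
import theano
import theano.tensor as T

y_pred = T.matrix('y_pred')  # per-pixel class probabilities, shape (N, n_classes)
y_true = T.vector('y_true')  # per-pixel binary ground truth, shape (N,)

dice = theano.function([y_pred, y_true], dice_coef(y_pred, y_true))

probs = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]], dtype=theano.config.floatX)
mask = np.array([1., 0., 1.], dtype=theano.config.floatX)
print(dice(probs, mask))  # 1.0, since the argmax prediction matches the mask exactly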

I am not sure if there is a problem with my implementation or if the Dice coefficient is simply not robust. See the output during training/validation below. For comparison, when I use categorical crossentropy I get AUC > 0.98. I was wondering if anyone has played with Dice on a UNet.

Started Experiment
Epoch: 0
train accuracy: 0.129538 train loss: -0.146859272992
val accuracy: 0.209342 val loss: -0.282476756789 val AUC score: 0.776537015996
Epoch: 1
train accuracy: 0.418164 train loss: -0.110509629949
val accuracy: 0.820385 val loss: -0.00156800820105 val AUC score: 0.5
Epoch: 2
train accuracy: 0.375172 train loss: -0.129266330856
val accuracy: 0.790744 val loss: -0.00923636992406 val AUC score: 0.5
Epoch: 3
train accuracy: 0.581028 train loss: -0.0889976615506
val accuracy: 0.194278 val loss: -0.279695818208 val AUC score: 0.5
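
One thing worth noting about the snippet above: T.argmax has no useful gradient, so the network cannot learn through the Dice term itself. A common differentiable ("soft") variant uses the predicted foreground probabilities directly and negates the score so it can be minimized; a minimal sketch, assuming the foreground class is channel 1 of the softmax output:

import theano.tensor as T

def soft_dice_coef(y_pred, y_true, smooth=1.0):
    y_true_f = T.flatten(y_true)
    y_pred_f = T.flatten(y_pred[:, 1])  # foreground probabilities, kept differentiable
    intersection = T.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (T.sum(y_true_f) + T.sum(y_pred_f) + smooth)

def soft_dice_loss(y_pred, y_true):
    return -soft_dice_coef(y_pred, y_true)  # negate so that minimizing the loss maximizes Dice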
