
difference between dice coefficient and weighted dice coefficient #1

Open · sneh-debug opened this issue Feb 17, 2020 · 2 comments

@sneh-debug commented:
Why does the weighted dice coefficient use axis=[-3, -2, -1]?

@woodywff (Owner) commented Feb 17, 2020:

That's according to the definition of the weighted dice coefficient, which can be found here. The difference is that the weighted dice first sums the 3D image voxel values (0 or 1) per volume and then averages the resulting per-volume scores along the remaining dimensions, whereas the plain dice coefficient sums values over all dimensions at once.
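The difference between the two reductions can be sketched with NumPy stand-ins for the Keras ops. This is an illustrative example only (the toy shapes and function names are mine, not from the repository): one volume is predicted perfectly and one is empty, so the per-volume ("weighted") average differs from the single global ratio.

```python
import numpy as np

def weighted_dice(y_true, y_pred, e=1e-8):
    # Sum voxel values per 3D volume (last three axes), then average
    # the resulting per-sample/per-channel dice scores.
    inter = np.sum(np.abs(y_true * y_pred), axis=(-3, -2, -1))
    dn = np.sum(np.square(y_true) + np.square(y_pred), axis=(-3, -2, -1)) + e
    return np.mean(2 * inter / dn)

def plain_dice(y_true, y_pred, e=1e-8):
    # One sum over every dimension at once.
    inter = np.sum(np.abs(y_true * y_pred))
    dn = np.sum(np.square(y_true) + np.square(y_pred)) + e
    return 2 * inter / dn

# Two single-channel 2x2x2 volumes: one perfectly predicted, one empty.
y_true = np.zeros((2, 1, 2, 2, 2))
y_true[0] = 1
y_pred = y_true.copy()
print(weighted_dice(y_true, y_pred))  # ~0.5: mean of dice 1.0 and dice 0.0
print(plain_dice(y_true, y_pred))     # ~1.0: a single global sum
```

The weighted version penalizes the empty volume as a full miss, while the global sum lets the correctly predicted volume dominate.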

@sneh-debug (Author) commented:

Build and compile the model:

out = out_GT
model = Model(inp, outputs=[out, out_VAE])  # Create the model
model.compile(
    adam(lr=1e-4),
    [loss_gt(dice_e), loss_VAE(input_shape, z_mean, z_var, weight_L2=weight_L2, weight_KL=weight_KL)],
    metrics=[dice_coefficient]
)

def dice_coefficient(y_true, y_pred):
    intersection = K.sum(K.abs(y_true * y_pred), axis=[-3, -2, -1])
    dn = K.sum(K.square(y_true) + K.square(y_pred), axis=[-3, -2, -1]) + 1e-8
    return K.mean(2 * intersection / dn, axis=[0, 1])

def loss_gt(e=1e-8):
    """
    loss_gt(e=1e-8)
    ------------------------------------------------------
    Since Keras does not allow custom loss functions to have arguments
    other than the true and predicted labels, this function acts as a wrapper
    that allows us to implement the custom loss used in the paper. This function
    only calculates the -L<dice> term of the following equation (i.e. the GT
    decoder part of the loss):

    L = - L<dice> + weight_L2 * L<L2> + weight_KL * L<KL>

    Parameters
    ----------
    `e`: Float, optional
        A small epsilon term to add in the denominator to avoid dividing by
        zero and possible gradient explosion.

    Returns
    -------
    loss_gt_(y_true, y_pred): A custom Keras loss function
        This function takes the predicted and ground-truth labels and uses
        them to calculate the dice loss.
    """
    def loss_gt_(y_true, y_pred):
        intersection = K.sum(K.abs(y_true * y_pred), axis=[-3, -2, -1])
        dn = K.sum(K.square(y_true) + K.square(y_pred), axis=[-3, -2, -1]) + e
        dice = 2 * intersection / dn
        dice_loss = 1 - dice
        return dice_loss

    return loss_gt_

def loss_VAE(input_shape, z_mean, z_var, weight_L2=0.1, weight_KL=0.1):
    """
    loss_VAE(input_shape, z_mean, z_var, weight_L2=0.1, weight_KL=0.1)
    ------------------------------------------------------
    Since Keras does not allow custom loss functions to have arguments
    other than the true and predicted labels, this function acts as a wrapper
    that allows us to implement the custom loss used in the paper. This function
    calculates all terms of the following equation except -L<dice> (i.e. the
    VAE decoder part of the loss):

    L = - L<dice> + weight_L2 * L<L2> + weight_KL * L<KL>

    Parameters
    ----------
    `input_shape`: A 4-tuple, required
        The shape of an image as the tuple (c, H, W, D), where c is
        the number of channels and H, W and D are the height, width and depth
        of the input image, respectively.
    `z_mean`: A keras.layers.Layer instance, required
        The vector representing values of the mean for the learned distribution
        in the VAE part. Used internally.
    `z_var`: A keras.layers.Layer instance, required
        The vector representing values of the variance for the learned distribution
        in the VAE part. Used internally.
    `weight_L2`: A real number, optional
        The weight to be given to the L2 loss term in the loss function. Adjust to get
        the best results for your task. Defaults to 0.1.
    `weight_KL`: A real number, optional
        The weight to be given to the KL loss term in the loss function. Adjust to get
        the best results for your task. Defaults to 0.1.

    Returns
    -------
    loss_VAE_(y_true, y_pred): A custom Keras loss function
        This function takes the predicted and ground-truth labels and uses
        them to calculate the L2 and KL loss.
    """
    def loss_VAE_(y_true, y_pred):
        c, H, W, D = input_shape
        n = c * H * W * D

        loss_L2 = K.mean(K.square(y_true - y_pred), axis=(1, 2, 3, 4))

        loss_KL = (1 / n) * K.sum(
            K.exp(z_var) + K.square(z_mean) - 1. - z_var,
            axis=-1
        )

        return weight_L2 * loss_L2 + weight_KL * loss_KL

    return loss_VAE_
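As a side note on the KL term: since the code applies K.exp(z_var), z_var is evidently the *log*-variance, and the usual 1/2 factor of the closed-form KL divergence is absent (presumably absorbed into weight_KL). A minimal NumPy sketch, with scalar values I chose for illustration, checks the closed form KL(N(mu, sigma^2) || N(0, 1)) against a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, log_var = 0.5, -0.3  # hypothetical latent statistics, not from the model

# Closed form: 0.5 * (exp(log_var) + mu^2 - 1 - log_var)
kl_closed = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)

# Monte Carlo: E_q[log q(z) - log p(z)] under q = N(mu, exp(log_var))
z = rng.normal(mu, np.exp(0.5 * log_var), size=1_000_000)
log_q = -0.5 * (np.log(2 * np.pi) + log_var + (z - mu) ** 2 / np.exp(log_var))
log_p = -0.5 * (np.log(2 * np.pi) + z ** 2)
kl_mc = np.mean(log_q - log_p)

print(kl_closed, kl_mc)  # the two estimates should closely agree
```

Apart from the 1/2 factor, this is term-by-term the expression summed inside loss_KL above: exp(z_var) + z_mean^2 - 1 - z_var.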

I am getting this output:

Epoch 37/90
2/2 [==============================] - 31s 16s/step - loss: 1.0285 - Dec_GT_Output_loss: 0.9942 - Dec_VAE_Output_loss: 0.0343 - Dec_GT_Output_dice_coefficient: 0.0058 - Dec_VAE_Output_dice_coefficient: 0.8532

Why am I getting Dec_VAE_Output_dice_coefficient: 0.8532 while Dec_GT_Output_dice_coefficient is only 0.0058, which is very low?
Please help me resolve this issue.
