A collection of metrics used to evaluate the segmentation models
from fastai.vision.all import *
import numpy as np
from torch.nn.modules.loss import _Loss
import segmentation_models_pytorch as smp
from steel_segmentation.utils import get_train_df
from steel_segmentation.transforms import SteelDataBlock, SteelDataLoaders
path = Path("../data")
train_pivot = get_train_df(path=path, pivot=True)
block = SteelDataBlock(path)
dls = SteelDataLoaders(block, train_pivot, bs=8)
xb, yb = dls.one_batch()
print(xb.shape, xb.device)
print(yb.shape, yb.device)
torch.Size([8, 3, 224, 1568]) cuda:0
torch.Size([8, 4, 224, 1568]) cpu
device = "cuda" if torch.cuda.is_available() else "cpu"
device
'cuda'
model = smp.Unet("resnet18", classes=4).to(device)

logits = model(xb)                 # raw per-class outputs: (bs, 4, H, W)
probs = torch.sigmoid(logits)      # per-pixel probabilities
preds = (probs > 0.5).float()      # hard binary masks

Kaggle Dice metric

The competition evaluation metric is defined as:

This competition is evaluated on the mean Dice coefficient. The Dice coefficient can be used to compare the pixel-wise agreement between a predicted segmentation and its corresponding ground truth. The formula is given by:

$$ Dice(X, Y) = \frac{2 \, |X \cap Y|}{|X| + |Y|} $$

where X is the predicted set of pixels and Y is the ground truth. The Dice coefficient is defined to be 1 when both X and Y are empty. The leaderboard score is the mean of the Dice coefficients for each <ImageId, ClassId> pair in the test set.
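
For a single <ImageId, ClassId> pair this can be computed directly from two binary masks. Below is a minimal NumPy sketch (the helper name dice_coefficient is ours, not part of the competition kit), including the empty/empty case that is defined as 1:

def dice_coefficient(pred_mask, true_mask):
    """Dice coefficient between two binary masks (1 = defect pixel)."""
    pred_mask, true_mask = pred_mask.astype(bool), true_mask.astype(bool)
    if not pred_mask.any() and not true_mask.any():
        return 1.0  # both masks empty: defined as 1 by the competition
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + true_mask.sum())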

This section contains all the metrics that can be used to evaluate the performance of the trained segmentation models.

Training is simulated with compute_val and a test Learner, TstLearner.

# A minimal Learner stand-in: it only stores pred, xb and yb so that metrics can be tested.
@delegates()
class TstLearner(Learner):
    def __init__(self, dls=None, model=None, **kwargs):
        self.pred, self.xb, self.yb = None, None, None
        self.loss_func = BCEWithLogitsLossFlat()
        
# Go through a fake training cycle with various batch sizes and compute the value of met
def compute_val(met, pred, y):
    met.reset()
    vals = [0, 6, 15, 20]
    learn = TstLearner()
    for i in range(3):
        learn.pred = pred[vals[i]:vals[i+1]]
        learn.yb = (y[vals[i]:vals[i+1]],)
        met.accumulate(learn)
    return met.value

Multiclass Dice

The fastai library comes with a Dice metric for multi-class masks. As a segmentation metric in this framework, it expects a flattened (single-channel, class-index) mask as target.

multidice_obj = DiceMulti()
compute_val(multidice_obj, pred=preds.detach().cpu(), y=yb.argmax(1))
0.1798790120410166
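
The y=yb.argmax(1) argument flattens the 4-channel one-hot target into the class-index mask that DiceMulti expects; a quick shape check (an illustrative snippet, not part of the library) makes the conversion explicit. Note that pixels where all four channels are zero end up in class 0.

flat_targets = yb.argmax(1)                # channel-wise argmax over the 4 defect classes
print(yb.shape, "->", flat_targets.shape)  # torch.Size([8, 4, 224, 1568]) -> torch.Size([8, 224, 1568])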

Here we slightly modify DiceMulti so that it accepts a 4-channel mask as target and, with with_logits=True, raw logits as predictions.

class ModDiceMulti[source]

ModDiceMulti(axis=1, with_logits=False) :: Metric

Averaged Dice metric (Macro F1) for multiclass target in segmentation
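
ModDiceMulti accumulates per-class intersections and unions over the batches it sees. As a rough, simplified sketch of the quantity it reports (this is not the library implementation, and macro_dice is a hypothetical helper), a macro-averaged Dice over a single batch of 4-channel binary masks could be computed like this:

def macro_dice(preds, targs):
    """Simplified per-batch macro Dice over 4-channel binary masks.
    NOTE: illustrative sketch only, not the actual ModDiceMulti implementation."""
    dices = []
    for c in range(targs.shape[1]):               # loop over the 4 defect classes
        p, t = preds[:, c].bool(), targs[:, c].bool()
        union = p.sum().item() + t.sum().item()
        if union == 0:
            continue                              # class absent in preds and targets: skipped
        inter = (p & t).sum().item()
        dices.append(2 * inter / union)
    return sum(dices) / len(dices) if dices else None

macro_dice(preds.detach().cpu(), yb)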

dice_obj = ModDiceMulti(with_logits=True)
compute_val(dice_obj, pred=logits.detach().cpu(), y=yb)
0.2130325182791189
dice_obj = ModDiceMulti()
compute_val(dice_obj, pred=preds.detach().cpu(), y=yb)
0.2130325182791189