Loss functions

A set of custom loss functions

source

MSELoss

 MSELoss (inp:Any, targ:Any)

source

L1Loss

 L1Loss (inp:Any, targ:Any)
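
A minimal usage sketch for both wrappers, assuming each is called directly as loss(inp, targ) and returns a scalar tensor:

import torch

inp = torch.randn(4, 1, 64, 64)   # predictions
targ = torch.randn(4, 1, 64, 64)  # ground truth

print("MSE:", MSELoss(inp, targ))
print("L1: ", L1Loss(inp, targ))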

SSIMLoss

 SSIMLoss (spatial_dims:int, data_range:float=1.0,
           kernel_type:monai.metrics.regression.KernelType|str='gaussian',
           win_size:int|collections.abc.Sequence[int]=11,
           kernel_sigma:float|collections.abc.Sequence[float]=1.5,
           k1:float=0.01, k2:float=0.03,
           reduction:monai.utils.enums.LossReduction|str='mean')

*Compute the loss function based on the Structural Similarity Index Measure (SSIM) Metric.

For more info, visit https://vicuesoft.com/glossary/term/ssim-ms-ssim/

SSIM reference paper: Wang, Zhou, et al. “Image quality assessment: from error visibility to structural similarity.” IEEE Transactions on Image Processing 13.4 (2004): 600-612.*
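
A minimal 2D usage sketch (illustrative values; data_range should match the dynamic range of your images):

import torch

ssim = SSIMLoss(spatial_dims=2, data_range=1.0)
pred = torch.rand(2, 1, 64, 64)    # predictions in [0, 1]
target = torch.rand(2, 1, 64, 64)  # targets in [0, 1]
print(ssim(pred, target))          # scalar loss; lower means more similar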

Combined Losses


source

CombinedLoss

 CombinedLoss (spatial_dims=2, alpha=0.33, beta=0.33)

A weighted combination of loss terms, with alpha and beta setting the relative contribution of each component.
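
A usage sketch, assuming CombinedLoss is a module called as loss(pred, target), using the default weights from the signature:

import torch

combined = CombinedLoss(spatial_dims=2, alpha=0.33, beta=0.33)
pred = torch.rand(2, 1, 64, 64)
target = torch.rand(2, 1, 64, 64)
print(combined(pred, target))  # scalar combining the weighted terms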


source

MSSSIMLoss

 MSSSIMLoss (spatial_dims=2, window_size:int=11, sigma:float=1.5,
             reduction:str='mean', levels:int=5, weights=None)

*Multi-Scale Structural Similarity (MS-SSIM) loss. SSIM is computed at `levels` successive scales, each obtained by downsampling the previous one, and the per-scale scores are combined into a single value; the loss is 1 - MS-SSIM, so lower is better. If weights is None, a standard set of per-scale weights is typically used.

MS-SSIM reference paper: Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. “Multiscale structural similarity for image quality assessment.” The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003.*

import torch

msssim_loss = MSSSIMLoss(levels=3)
ssim_loss = SSIMLoss(2)
output = torch.rand(10, 3, 64, 64)  # Example output (move to GPU with .cuda() if available)
target = torch.rand(10, 3, 64, 64)  # Example target
loss = msssim_loss(output, target)
loss2 = ssim_loss(output, target)
print("ms-ssim: ", loss, '\nssim: ', loss2)

source

MSSSIML1Loss

 MSSSIML1Loss (spatial_dims=2, alpha:float=0.025, window_size:int=11,
               sigma:float=1.5, reduction:str='mean', levels:int=3,
               weights=None)

*Mix of MS-SSIM and Gaussian-weighted L1 losses, with alpha controlling the balance between the two terms. This pairing of MS-SSIM with an L1 term was proposed for image restoration by Zhao et al., “Loss functions for image restoration with neural networks,” IEEE Transactions on Computational Imaging 3.1 (2017): 47-57.*

msssiml1_loss = MSSSIML1Loss(alpha=0.025, window_size=11, sigma=1.5, levels=3)
input_image = torch.randn(4, 1, 128, 128)  # Batch of 4 grayscale images (1 channel)
target_image = torch.randn(4, 1, 128, 128)

# Compute MS-SSIM + Gaussian-weighted L1 loss
loss = msssiml1_loss(input_image, target_image)
loss2 = ssim_loss(input_image, target_image)
print("ms-ssim + l1: ", loss, '\nssim: ', loss2)
ms-ssim + l1:  tensor(0.0248) 
ssim:  tensor(0.9946)

source

MSSSIML2Loss

 MSSSIML2Loss (spatial_dims=2, alpha:float=0.1, window_size:int=11,
               sigma:float=1.5, reduction:str='mean', levels:int=3,
               weights=None)

*Mix of MS-SSIM and L2 (mean squared error) losses, with alpha controlling the balance between the two terms.*

msssim_l2_loss = MSSSIML2Loss()
output = torch.rand(10, 3, 64, 64).cuda()  # Example output with even dimensions
target = torch.rand(10, 3, 64, 64).cuda()  # Example target with even dimensions
loss = msssim_l2_loss(output, target)
print(loss)
tensor(0.0963, device='cuda:0')

Dice Loss


source

DiceLoss

 DiceLoss (smooth=1)

*DiceLoss computes the Sørensen–Dice coefficient loss, which is often used for evaluating the performance of image segmentation algorithms.

The Dice coefficient is a measure of overlap between two samples. It ranges from 0 (no overlap) to 1 (perfect overlap). The Dice loss is computed as 1 - Dice coefficient, so it ranges from 1 (no overlap) to 0 (perfect overlap).

Attributes:

- smooth (float): A smoothing factor to avoid division by zero and ensure numerical stability.

Methods:

- forward(inputs, targets): Computes the Dice loss between the predicted probabilities (inputs) and the ground truth (targets).*
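
For reference, a minimal sketch of the usual smoothed Dice formulation (the actual class may differ in details, e.g. whether a sigmoid is applied to the inputs first):

import torch

def dice_loss_sketch(inputs, targets, smooth=1.0):
    # Dice = (2*|X ∩ Y| + smooth) / (|X| + |Y| + smooth); loss = 1 - Dice
    inputs = inputs.reshape(-1)
    targets = targets.reshape(-1)
    intersection = (inputs * targets).sum()
    dice = (2.0 * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
    return 1.0 - dice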

# inputs and targets must be tensors of the same shape
from torch import randn, randint
inputs = randn((1, 1, 256, 256))  # Input
targets = randint(0, 2, (1, 1, 256, 256)).float()  # Ground Truth

# Initialize
dice_loss = DiceLoss()

# Compute loss
loss = dice_loss(inputs, targets)
print('Dice Loss:', loss.item())
Dice Loss: 0.4992988705635071

Fourier Ring Correlation


source

FRCLoss

 FRCLoss (image1, image2)

*Compute the Fourier Ring Correlation (FRC) loss between two images.

Args:

- image1 (torch.Tensor): The first input image.
- image2 (torch.Tensor): The second input image.

Returns:

- torch.Tensor: The FRC loss.*
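
A usage sketch, assuming FRCLoss accepts two image tensors of the same shape:

import torch

image1 = torch.rand(256, 256)
image2 = image1 + 0.1 * torch.randn(256, 256)  # noisy copy of image1
loss = FRCLoss(image1, image2)
print(loss)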

source

FCRCutoff

 FCRCutoff (image1, image2)

*Calculate the cutoff frequency at which the Fourier ring correlation drops to 1/7.

Args:

- image1 (torch.Tensor): The first input image.
- image2 (torch.Tensor): The second input image.

Returns:

- float: The cutoff frequency.*
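
A usage sketch under the same assumptions as above; the 1/7 threshold is the standard FRC resolution criterion:

import torch

image1 = torch.rand(256, 256)
image2 = image1 + 0.1 * torch.randn(256, 256)  # noisy copy of image1
cutoff = FCRCutoff(image1, image2)
print("Cutoff frequency:", cutoff)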