Losses
Losses are critical to training a neural network well. Training can only make progress if you provide a meaningful measure of loss at each training step. What the loss looks like usually depends on your application. PyTorch has a number of loss functions that you can use out of the box. However, some more advanced and cutting-edge loss functions exist that are not (yet) part of PyTorch. We include those below for your experimentation.
Caution: if you decide to use one of these, you will definitely want to peruse the source code first, as it has many additional useful notes and references which will help you.
Keep in mind that losses are specific to the type of task: classification losses are computed differently from segmentation losses. Within the segmentation domain, make sure to use BCE (Binary Cross Entropy) for any work involving binary masks (e.g. num_classes = 1). Make sure to read the documentation and notes (in the code) for each loss to understand how it is applied.
Note: A logit is the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
For example, BCEWithLogitsLoss is a BCE variant that accepts raw logits in (-inf, inf) and internally applies torch.sigmoid to map them into [0, 1] space, as sketched below.
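A minimal sketch of that equivalence, assuming a single-logit binary setup (the tensors here are illustrative stand-ins):

    import torch
    import torch.nn as nn

    logits = torch.randn(4, 1)          # raw scores in (-inf, inf)
    targets = torch.rand(4, 1)          # ground truth in [0, 1]

    # Fused version: sigmoid is applied inside the loss (numerically stable).
    loss_fused = nn.BCEWithLogitsLoss()(logits, targets)

    # Manual two-step version: squash to [0, 1] first, then plain BCE.
    loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)

    assert torch.allclose(loss_fused, loss_manual, atol=1e-6)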
However, if you use one-hot encoding or similar methods where you need to convert a tensor to PyTorch from another source (e.g. numpy), you will need to make sure the resulting tensor has the correct type. For example, if y_hot is of type long but the BCE loss expects a float tensor, you can convert it with y_hot = y_hot.type_as(output).
To convert predictions into the (0, 1) range you will sometimes need to use either softmax or sigmoid: softmax is used for multi-class classification, whereas sigmoid is used for binary classification (both in the logistic regression sense).
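A short sketch tying these notes together (the y_hot and output names are illustrative, not part of pywick):

    import torch

    raw = torch.randn(4, 3)                  # logits: 4 samples, 3 classes

    # Multi-class: softmax normalizes each row into probabilities summing to 1.
    probs_multi = torch.softmax(raw, dim=1)

    # Binary: sigmoid maps each raw score independently into (0, 1).
    probs_binary = torch.sigmoid(raw[:, 0])

    # The dtype fix described above: a long one-hot tensor converted to match
    # the float model output before handing it to a BCE-style loss.
    y_hot = torch.eye(3)[torch.tensor([0, 2, 1, 0])].long()
    output = raw
    y_hot = y_hot.type_as(output)            # now float32, same dtype as output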
class pywick.losses.ActiveContourLoss(lambdaP=5.0, mu=1.0, is_binary: bool = False, **_)
Learning Active Contour Models for Medical Image Segmentation. Note that this only works for B/W masks right now… which is kind of the point of this loss, as contours in RGB should be cast to B/W before computing the loss.
Params:
- mu: (float, default=1.0) - Scales the inner region loss relative to the outer region (less or more prominent).
- lambdaP: (float, default=5.0) - Scales the combined region loss compared to the length loss (less or more prominent).
class pywick.losses.ActiveContourLossAlt(len_w=1.0, reg_w=1.0, apply_log=True, is_binary: bool = False, **_)
Learning Active Contour Models for Medical Image Segmentation. Note that this only works for B/W masks right now… which is kind of the point of this loss, as contours in RGB should be cast to B/W before computing the loss.
Params:
- len_w: (float, default=1.0) - The multiplier to use when adding boundary loss.
- reg_w: (float, default=1.0) - The multiplier to use when adding region loss.
- apply_log: (bool, default=True) - Whether to transform the loss into log space (due to the …)
class pywick.losses.AngularPenaltySMLoss(in_features, out_features, loss_type='arcface', eps=1e-07, s=None, m=None, **_)
class pywick.losses.AsymLoss(apply_nonlin=None, batch_dice=False, do_bg=True, smooth=1.0, square=False, **_)
class pywick.losses.BCELoss2d(weight=None, size_average=True, **_)
class pywick.losses.BCEWithLogitsViewLoss(weight=None, size_average=True, **_)
Silly wrapper of nn.BCEWithLogitsLoss because BCEWithLogitsLoss only takes a 1-D array
class pywick.losses.BCEDiceTL1Loss(threshold=0.5, **_)
class pywick.losses.BCEDicePenalizeBorderLoss(kernel_size=55, **_)
class pywick.losses.BCEDiceFocalLoss(focal_param, weights=None, **kwargs)
Parameters:
- num_classes – number of classes
- gamma – (float, double) gamma > 0 reduces the relative loss for well-classified examples (p > 0.5), putting more focus on hard, misclassified examples
- size_average – (bool, optional) By default, the losses are averaged over each loss element in the batch.
- weights – (list(), default = [1, 1, 1]) Optional weighting (0.0-1.0) of the losses in order of [bce, dice, focal]
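A hedged usage sketch (the forward call and binary-mask shapes follow the conventions described at the top of this page; peruse the source for the exact contract, per the caution note):

    import torch
    from pywick.losses import BCEDiceFocalLoss

    # Component weighting in [bce, dice, focal] order, as documented above.
    criterion = BCEDiceFocalLoss(focal_param=2.0, weights=[1.0, 0.5, 1.0])

    logits = torch.randn(2, 1, 64, 64)                # raw model output
    masks = (torch.rand(2, 1, 64, 64) > 0.5).float()  # binary ground-truth masks
    loss = criterion(logits, masks)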
class pywick.losses.BinaryFocalLoss(gamma=1.333, eps=1e-06, alpha=1.0, **_)
Implementation of binary focal loss. For multi-class focal loss use one of the other implementations.
gamma = 0 is equivalent to BinaryCrossEntropy Loss
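A minimal training-step sketch, assuming the binary-mask convention (num_classes = 1) described at the top of this page:

    import torch
    from pywick.losses import BinaryFocalLoss

    criterion = BinaryFocalLoss(gamma=1.333)   # gamma=0 reduces to plain BCE

    logits = torch.randn(8, 1, 32, 32, requires_grad=True)  # raw scores
    target = (torch.rand(8, 1, 32, 32) > 0.5).float()       # binary mask

    loss = criterion(logits, target)
    loss.backward()                            # gradients flow back to the logits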
class pywick.losses.ComboBCEDiceLoss(use_running_mean=False, bce_weight=1, dice_weight=1, eps=1e-06, gamma=0.9, combined_loss_only=True, **_)
Combination of BinaryCrossEntropy (BCE) and Dice Loss with an optional running mean and loss weighting.
class pywick.losses.ComboSemsegLossWeighted(use_running_mean=False, bce_weight=1, dice_weight=1, eps=1e-06, gamma=0.9, use_weight_mask=False, combined_loss_only=False, **_)
class pywick.losses.EncNetLoss(se_loss=True, se_weight=0.2, nclass=19, aux=False, aux_weight=0.4, weight=None, ignore_index=-1, **_)
2D Cross Entropy Loss with SE Loss
Specifically used for EncNet. se_loss is the Semantic Encoding Loss from the paper Context Encoding for Semantic Segmentation. It computes probabilities of contexts appearing together.
Without SE_loss and Aux_loss this class simply forwards inputs to Torch’s Cross Entropy Loss (nn.CrossEntropyLoss)
class pywick.losses.FocalLoss(l=0.5, eps=1e-06, **_)
Weighs the contribution of each sample to the loss based on the classification error. If a sample is already classified correctly by the CNN, its contribution to the loss decreases.
Eps: Focusing parameter. eps=0 is equivalent to BCE_loss
class pywick.losses.FocalLoss2(num_class, alpha=None, gamma=2, balance_index=-1, smooth=None, size_average=True, **_)
This is an implementation of Focal Loss with smooth label cross entropy support, as proposed in 'Focal Loss for Dense Object Detection' (https://arxiv.org/abs/1708.02002).
Focal_Loss = -1 * alpha * (1 - pt)^gamma * log(pt)

Params:
- num_class: number of classes
- alpha: (tensor) 3D or 4D, the scalar factor for this criterion
- gamma: (float, double) gamma > 0 reduces the relative loss for well-classified examples (p > 0.5), putting more focus on hard, misclassified examples
- smooth: (float, double) smoothing value when using cross entropy
- balance_index: (int) balance class index; should be specified when alpha is a float
- size_average: (bool, optional) By default, the losses are averaged over each loss element in the batch.
class pywick.losses.HausdorffERLoss(alpha=2.0, erosions=10, **kwargs)
Binary Hausdorff loss based on morphological erosion
forward(pred: Tensor, target: Tensor, debug=False) → Tensor

Uses one binary channel: 1 - fg, 0 - bg
pred: (b, 1, x, y, z) or (b, 1, x, y)
target: (b, 1, x, y, z) or (b, 1, x, y)
class pywick.losses.HausdorffDTLoss(alpha=2.0, **_)
Binary Hausdorff loss based on distance transform
forward(logits: Tensor, labels: Tensor, debug=False, **_) → Tensor

Uses one binary channel: 1 - fg, 0 - bg
pred: (b, 1, x, y, z) or (b, 1, x, y)
target: (b, 1, x, y, z) or (b, 1, x, y)
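A small sketch following the documented (b, 1, x, y) shape convention (random tensors stand in for real model output and masks):

    import torch
    from pywick.losses import HausdorffDTLoss

    criterion = HausdorffDTLoss(alpha=2.0)

    # One binary channel: 1 - fg, 0 - bg, shaped (b, 1, x, y) as documented.
    logits = torch.randn(2, 1, 48, 48)
    labels = (torch.rand(2, 1, 48, 48) > 0.5).float()

    loss = criterion(logits, labels)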
class pywick.losses.LovaszSoftmax(reduction='mean', **_)
class pywick.losses.mIoULoss(weight=None, size_average=True, num_classes=2, **_)
class pywick.losses.MixSoftmaxCrossEntropyOHEMLoss(aux=False, aux_weight=0.4, weight=None, ignore_index=-1, **kwargs)
Bases: pywick.losses.OhemCrossEntropy2d
Loss taking into consideration class and segmentation targets together, as well as using OHEM.
class pywick.losses.OhemCELoss(configer, is_binary=False)
class pywick.losses.OhemCrossEntropy2d(thresh=0.6, min_kept=0, ignore_index=-100, is_binary=True, **kwargs)
class pywick.losses.OhemBCEDicePenalizeBorderLoss(thresh=0.6, min_kept=0, ignore_index=-100, kernel_size=21, **_)
Bases: pywick.losses.OhemCrossEntropy2d
Combined OHEM (Online Hard Example Mining) process with BCE-Dice penalized loss
class pywick.losses.PoissonLoss(bias=1e-12, **_)
class pywick.losses.PoissonLoss3d(bias=1e-12, **_)
class pywick.losses.RecallLoss(weight=None, **_)
An unofficial implementation of 'Recall Loss for Imbalanced Image Classification and Semantic Segmentation'. Created by: Zhang Shuai (shuaizzz666@gmail.com).

recall = TP / (TP + FN)

Args:
- weight: An array of shape [C,]
- predict: A float32 tensor of shape [N, C, *]; for semantic segmentation, [N, C, H, W]
- target: An int64 tensor of shape [N, *]; for semantic segmentation, [N, H, W]

Return:
- diceloss
class pywick.losses.RMILoss(num_classes=1, rmi_radius=3, rmi_pool_way=0, rmi_pool_size=3, rmi_pool_stride=3, loss_weight_lambda=0.5, lambda_way=1, device='cuda', **_)
Region mutual information: I(A, B) = H(A) + H(B) - H(A, B). This version needs a lot of memory if you do not downsample.
forward_sigmoid(logits_4D, labels_4D)

Using the sigmoid operation for both.

Args:
- logits_4D: [N, C, H, W], dtype=float32
- labels_4D: [N, H, W], dtype=long
forward_softmax_sigmoid(inputs, targets)

Using both softmax and sigmoid operations.

Args:
- inputs: [N, C, H, W], dtype=float32
- targets: [N, H, W], dtype=long
static log_det_by_cholesky(matrix)

Args:
- matrix: must be a positive definite matrix of shape [N, C, D, D].

Ref:
- https://github.com/tensorflow/tensorflow/blob/r1.13/tensorflow/python/ops/linalg/linalg_impl.py
class pywick.losses.RMILossAlt(with_logits, radius=3, bce_weight=0.5, downsampling_method='max', stride=3, use_log_trace=True, use_double_precision=True, epsilon=0.0005, **_)
PyTorch Module which calculates the Region Mutual Information loss (https://arxiv.org/abs/1910.12037).
class pywick.losses.RMIBCEDicePenalizeBorderLoss(kernel_size=21, rmi_weight=1.0, bce_weight=1.0, **kwargs)
Bases: pywick.losses.RMILossAlt
Combined RMI and BCEDicePenalized Loss
class pywick.losses.SoftInvDiceLoss(smooth=1.0, is_binary=True, **_)
Well-performing loss for binary segmentation
class pywick.losses.SoftDiceLoss(smooth=1.0, **_)
class pywick.losses.TverskyLoss(alpha, beta, eps=1e-07, **_)
Computes the Tversky loss [1].

Args:
- alpha: controls the penalty for false positives.
- beta: controls the penalty for false negatives.
- eps: added to the denominator for numerical stability.

Returns:
- tversky_loss: the Tversky loss.

Notes:
- alpha = beta = 0.5 => Dice coefficient
- alpha = beta = 1 => Tanimoto coefficient
- alpha + beta = 1 => F-beta coefficient

References:
- [1]: https://arxiv.org/abs/1706.05721
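A short sketch of the special cases listed in the notes (the alpha/beta values are chosen for illustration):

    from pywick.losses import TverskyLoss

    dice_like = TverskyLoss(alpha=0.5, beta=0.5)      # reduces to the Dice coefficient
    tanimoto_like = TverskyLoss(alpha=1.0, beta=1.0)  # reduces to the Tanimoto coefficient
    recall_heavy = TverskyLoss(alpha=0.3, beta=0.7)   # F-beta family: penalizes FN more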
class pywick.losses.ThresholdedL1Loss(threshold=0.5, **_)
class pywick.losses.WeightedSoftDiceLoss(**_)
class pywick.losses.BDLoss(is_binary: bool = False, **_)
class pywick.losses.L1Loss3d(bias=1e-12, **_)
class pywick.losses.WingLoss(width=5, curvature=0.5, reduction='mean', **_)
Used to enhance facial segmentation
class pywick.losses.BoundaryLoss(theta0=19, theta=19, ignore_index=None, weight=None, is_binary: bool = False, **_)
Boundary Loss proposed in: Alexey Bokhovkin et al., Boundary Loss for Remote Sensing Imagery Semantic Segmentation https://arxiv.org/abs/1905.07852