
CyCADA: Cycle-Consistent Adversarial Domain Adaptation

class dalib.translation.cycada.SemanticConsistency(ignore_index=(), reduction='mean')[source]

Semantic consistency loss was introduced in CyCADA: Cycle-Consistent Adversarial Domain Adaptation (ICML 2018).

It helps prevent label flipping during image translation: a classifier pretrained on the source domain provides pseudo-labels for the original images, and the translated images are penalized when the classifier's predictions on them disagree with those pseudo-labels.
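A minimal sketch of how this loss is typically applied in that setup. Here `classifier` (a fixed classifier pretrained on the source domain), `generator` (the source-to-target translator), and `images` (a batch of source images) are hypothetical placeholders, not part of this library's API:

>>> with torch.no_grad():
...     pseudo_labels = classifier(images).argmax(dim=1)  # predictions on the original images
>>> logits = classifier(generator(images))  # predictions on the translated images
>>> consistency = SemanticConsistency()(logits, pseudo_labels)
>>> consistency.backward()  # in the usual setup, gradients update the generator; the classifier stays frozen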

Parameters
  • ignore_index (tuple, optional) – Specifies target values that are ignored and do not contribute to the input gradient. When reduction is 'mean', the loss is averaged only over non-ignored targets. Default: ().

  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the mean of the output is taken, 'sum': the output will be summed. Default: 'mean'

Shape:
  • Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

  • Target: \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

  • Output: scalar. If reduction is 'none', then the same size as the target: \((N)\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.

Examples:

>>> import torch
>>> from dalib.translation.cycada import SemanticConsistency
>>> loss = SemanticConsistency()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
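For dense prediction (e.g., semantic segmentation), the same loss accepts K-dimensional inputs, as described under Shape above. A sketch assuming the label value 255 marks pixels to ignore (a common convention, not mandated by this library):

>>> loss = SemanticConsistency(ignore_index=(255,))
>>> input = torch.randn(2, 5, 4, 4, requires_grad=True)  # (N, C, H, W) logits
>>> target = torch.randint(0, 5, (2, 4, 4))              # (N, H, W) label map
>>> target[0, 0, 0] = 255                                # this pixel is ignored
>>> output = loss(input, target)
>>> output.backward()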
