Modules

Classifier

class common.modules.classifier.Classifier(backbone, num_classes, bottleneck=None, bottleneck_dim=-1, head=None, finetune=True, pool_layer=None)[source]

A generic Classifier class for domain adaptation.

Parameters
  • backbone (torch.nn.Module) – Any backbone that extracts 2-d features from the input data

  • num_classes (int) – Number of classes

  • bottleneck (torch.nn.Module, optional) – Any bottleneck layer. No bottleneck is used by default

  • bottleneck_dim (int, optional) – Feature dimension of the bottleneck layer. Default: -1

  • head (torch.nn.Module, optional) – Any classifier head. Uses torch.nn.Linear by default

  • finetune (bool) – Whether to finetune the classifier or train it from scratch. Default: True

  • pool_layer (torch.nn.Module, optional) – Pooling layer applied to the backbone's output. Default: None

Note

Different domain adaptation algorithms use different classifiers to achieve their best accuracy, so we provide a suggested Classifier for each algorithm. Remember that the classifier is not the core of an algorithm: you can implement your own Classifier and combine it with any domain adaptation algorithm in this library.
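
For illustration, a sketch of wiring in a custom bottleneck is shown below. Everything named here (ToyBackbone, the bottleneck layers, the dimensions) is hypothetical; the interface assumptions are that the backbone exposes an out_features attribute, as the backbones bundled with this library are expected to, and that pool_layer=nn.Identity() skips pooling for features that are already flat.

    import torch.nn as nn

    from common.modules.classifier import Classifier

    # Hypothetical stand-in backbone: maps flat inputs to 2-d features and
    # exposes `out_features`, which Classifier is assumed to read.
    class ToyBackbone(nn.Module):
        def __init__(self, out_features=512):
            super().__init__()
            self.out_features = out_features
            self.fc = nn.Linear(784, out_features)

        def forward(self, x):
            return self.fc(x)

    backbone = ToyBackbone()

    # One common bottleneck pattern (purely illustrative; the right choice
    # is algorithm-specific). bottleneck_dim must match its output width.
    bottleneck = nn.Sequential(
        nn.Linear(backbone.out_features, 256),
        nn.BatchNorm1d(256),
        nn.ReLU(),
    )

    classifier = Classifier(backbone, num_classes=31,
                            bottleneck=bottleneck, bottleneck_dim=256,
                            pool_layer=nn.Identity())  # features already flat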

Note

By default, the learning rate of this classifier is set to 10 times that of the feature extractor, which usually gives better accuracy. If you use a different optimization strategy, override get_parameters().
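
As an illustration only, one possible override (the subclass name is hypothetical) that trains every layer at the same rate:

    from common.modules.classifier import Classifier

    class UniformLRClassifier(Classifier):
        """Hypothetical subclass: one parameter group, no 10x head scaling."""

        def get_parameters(self, base_lr=1.0):
            return [{"params": self.parameters(), "lr": base_lr}]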

Inputs:
  • x (tensor): input data fed to backbone

Outputs:
  • predictions: classifier’s predictions

  • features: features after bottleneck layer and before head layer

Shape:
  • Inputs: (minibatch, *) where * means any number of additional dimensions

  • predictions: (minibatch, num_classes)

  • features: (minibatch, features_dim)
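
Continuing the hypothetical ToyBackbone sketch from the first note above, and assuming a call returns the two documented outputs, the shapes line up as follows:

    import torch

    clf = Classifier(ToyBackbone(out_features=64), num_classes=10,
                     pool_layer=nn.Identity())
    x = torch.randn(8, 784)          # (minibatch, *)
    predictions, features = clf(x)   # the two documented outputs
    print(predictions.shape)         # torch.Size([8, 10]) == (minibatch, num_classes)
    print(features.shape)            # torch.Size([8, 64]) == (minibatch, features_dim)
    print(clf.features_dim)          # 64: no bottleneck, so backbone features pass through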

property features_dim

The dimension of features before the final head layer

get_parameters(base_lr=1.0)[source]

Returns a parameter list that decides optimization hyper-parameters, such as the relative learning rate of each layer
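
Continuing the example above, the returned groups can be handed straight to an optimizer. Per-group "lr" entries, where present, override the optimizer's default lr (standard PyTorch behavior); the hyper-parameter values below are placeholders:

    import torch.optim as optim

    # base_lr scales the per-layer rates from get_parameters(); the default
    # lr below only applies to groups that lack their own "lr" entry.
    optimizer = optim.SGD(clf.get_parameters(base_lr=0.01), lr=0.01,
                          momentum=0.9, weight_decay=5e-4, nesterov=True)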

Regressor

class common.modules.regressor.Regressor(backbone, num_factors, bottleneck=None, bottleneck_dim=-1, head=None, finetune=True)[source]

A generic Regressor class for domain adaptation.

Parameters
  • backbone (torch.nn.Module) – Any backbone that extracts 2-d features from the input data

  • num_factors (int) – Number of factors to predict

  • bottleneck (torch.nn.Module, optional) – Any bottleneck layer. No bottleneck is used by default

  • bottleneck_dim (int, optional) – Feature dimension of the bottleneck layer. Default: -1

  • head (torch.nn.Module, optional) – Any regressor head. Uses torch.nn.Linear by default

  • finetune (bool) – Whether to finetune the regressor or train it from scratch. Default: True

Note

By default, the learning rate of this regressor is set to 10 times that of the feature extractor, which usually gives better accuracy. If you use a different optimization strategy, override get_parameters().

Inputs:
  • x (tensor): input data fed to backbone

Outputs:
  • predictions: regressor’s predictions

  • features: features after bottleneck layer and before head layer

Shape:
  • Inputs: (minibatch, *) where * means any number of additional dimensions

  • predictions: (minibatch, num_factors)

  • features: (minibatch, features_dim)
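
A minimal Regressor sketch mirroring the Classifier example (ToyBackbone is the hypothetical backbone from above; we assume the regressor consumes the backbone's flat 2-d features directly, per the parameter description, and returns the two documented outputs):

    import torch

    from common.modules.regressor import Regressor

    reg = Regressor(ToyBackbone(out_features=64), num_factors=3)
    x = torch.randn(8, 784)          # (minibatch, *)
    predictions, features = reg(x)   # the two documented outputs
    print(predictions.shape)         # torch.Size([8, 3])  == (minibatch, num_factors)
    print(features.shape)            # torch.Size([8, 64]) == (minibatch, features_dim)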

property features_dim

The dimension of features before the final head layer

get_parameters(base_lr=1.0)[source]

Returns a parameter list that decides optimization hyper-parameters, such as the relative learning rate of each layer
