models package

models.resnet

Modified ResNet in PyTorch

Uses a 3x3 convolution in the first layer instead of 7x7. See our supplementary material for exact details.

Original paper: Deep Residual Learning for Image Recognition

https://arxiv.org/abs/1512.03385v1

class models.resnet.BasicBlock(in_channels: int, out_channels: int, stride: int = 1)

Bases: torch.nn.modules.module.Module

Basic block for ResNet-18 and ResNet-34.

Parameters
  • in_channels (int) – number of input channels

  • out_channels (int) – number of output channels

  • stride (int) – stride of the first convolution in this block

expansion = 1
forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
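The block's source is not reproduced in this reference. A minimal sketch of a standard ResNet basic block matching the documented interface (two 3x3 convolutions with batch norm, plus a projection shortcut when the shape changes) might look like the following; this is an illustrative assumption, not the package's exact code:

```python
import torch
import torch.nn as nn


class BasicBlockSketch(nn.Module):
    """Illustrative sketch of a ResNet basic block (not the package's exact code)."""
    expansion = 1

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # Two 3x3 convolutions; the first applies the stride.
        self.residual = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * self.expansion, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels * self.expansion),
        )
        # Projection shortcut when spatial size or channel count changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * self.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * self.expansion),
            )

    def forward(self, x):
        # Residual sum followed by the final activation.
        return torch.relu(self.residual(x) + self.shortcut(x))
```

Note that, as the forward() docstring above explains, callers should invoke the module itself (block(x)) rather than block.forward(x), so that registered hooks run.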
class models.resnet.BottleNeck(in_channels: int, out_channels: int, stride: int = 1)

Bases: torch.nn.modules.module.Module

Bottleneck residual block for ResNet-50 and deeper networks.

Parameters
  • in_channels (int) – number of input channels

  • out_channels (int) – number of output channels

  • stride (int) – stride of the first convolution in this block

expansion = 4
forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
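The expansion attribute controls how much the block widens its output: a bottleneck's final 1x1 convolution emits out_channels * expansion feature maps. A quick sanity check of this channel arithmetic (plain Python, independent of the package; the 64/128/256/512 stage widths are the standard ones from the ResNet paper):

```python
# Channel bookkeeping for bottleneck blocks (expansion = 4), as used by
# ResNet-50 and deeper. A bottleneck reduces to `out_channels` with a 1x1
# conv, convolves 3x3 at that width, then expands with a final 1x1 conv.
EXPANSION = 4


def bottleneck_out_channels(out_channels: int) -> int:
    """Number of feature maps the block actually emits."""
    return out_channels * EXPANSION


# The classic stage widths 64/128/256/512 therefore yield
# 256/512/1024/2048 output channels.
stage_widths = [64, 128, 256, 512]
outputs = [bottleneck_out_channels(c) for c in stage_widths]
print(outputs)  # [256, 512, 1024, 2048]
```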
class models.resnet.ResNet(block: Union[BasicBlock, BottleNeck], num_block: List[int], num_classes: int = 100, small_dense_density: float = 1.0, zero_init_residual: bool = True)

Bases: torch.nn.modules.module.Module

Modified ResNet in PyTorch

Uses a 3x3 convolution in the first layer instead of 7x7. See our supplementary material for exact details.

Original paper: Deep Residual Learning for Image Recognition

https://arxiv.org/abs/1512.03385v1

Parameters
  • block (Union[BasicBlock, BottleNeck]) – block type, BasicBlock or BottleNeck

  • num_block (List[int]) – number of blocks in each of the four stages

  • num_classes (int) – number of output classes

  • small_dense_density (float) – equivalent parameter density of the Small-Dense model

  • zero_init_residual (bool) – whether to initialize the last batch-norm gamma of each residual branch to zero, which empirically improves residual training. Default: True.

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
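num_block lists how many residual blocks each of the four stages stacks, and the conventional "ResNet-N" depth follows from it: each BasicBlock contributes 2 convolutions and each BottleNeck 3, plus the stem convolution and the final fully connected layer. A small sketch of that arithmetic (the block counts are the canonical ones from the paper, assumed to match this package):

```python
# Depth arithmetic for standard ResNet configurations.
CONVS_PER_BLOCK = {"BasicBlock": 2, "BottleNeck": 3}


def resnet_depth(block: str, num_block: list) -> int:
    """Conventional depth: stem conv + block convs + final fully connected layer."""
    return 1 + CONVS_PER_BLOCK[block] * sum(num_block) + 1


print(resnet_depth("BasicBlock", [2, 2, 2, 2]))  # 18
print(resnet_depth("BottleNeck", [3, 4, 6, 3]))  # 50
```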
models.resnet.resnet101()

Return a ResNet-101 model.

models.resnet.resnet152()

Return a ResNet-152 model.

models.resnet.resnet18()

Return a ResNet-18 model.

models.resnet.resnet34()

Return a ResNet-34 model.

models.resnet.resnet50()

Return a ResNet-50 model.
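These factory functions presumably wire up the standard configurations from the original paper. The block counts below are the canonical ones (an assumption about this package's source, not read from it); each satisfies the depth arithmetic for its name:

```python
# Canonical (block type, per-stage block counts) for each standard ResNet,
# per He et al. (2015). Convs per block: BasicBlock = 2, BottleNeck = 3.
CONFIGS = {
    "resnet18":  ("BasicBlock", [2, 2, 2, 2]),
    "resnet34":  ("BasicBlock", [3, 4, 6, 3]),
    "resnet50":  ("BottleNeck", [3, 4, 6, 3]),
    "resnet101": ("BottleNeck", [3, 4, 23, 3]),
    "resnet152": ("BottleNeck", [3, 8, 36, 3]),
}

for name, (block, counts) in CONFIGS.items():
    convs = {"BasicBlock": 2, "BottleNeck": 3}[block]
    depth = 1 + convs * sum(counts) + 1  # stem conv + block convs + fc
    assert depth == int(name[len("resnet"):]), name  # e.g. resnet50 -> 50
```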

models.wide_resnet

class models.wide_resnet.BasicBlock(in_planes: int, out_planes: int, stride: int, dropRate: float = 0.0)

Bases: torch.nn.modules.module.Module

Wide Residual Network basic block

For more info, see the paper: Wide Residual Networks by Sergey Zagoruyko, Nikos Komodakis https://arxiv.org/abs/1605.07146

Parameters
  • in_planes (int) – number of input channels

  • out_planes (int) – number of output channels

  • stride (int) – stride of the first convolution in this block

  • dropRate (float) – dropout probability

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class models.wide_resnet.NetworkBlock(nb_layers: int, in_planes: int, out_planes: int, block: models.wide_resnet.BasicBlock, stride: int, dropRate: float = 0.0)

Bases: torch.nn.modules.module.Module

Wide Residual Network stage that stacks several basic blocks.

For more info, see the paper:

Wide Residual Networks by Sergey Zagoruyko, Nikos Komodakis https://arxiv.org/abs/1605.07146

Parameters
  • nb_layers (int) – number of blocks in this stage

  • in_planes (int) – number of input channels

  • out_planes (int) – number of output channels

  • block (BasicBlock) – block type (only BasicBlock is supported)

  • stride (int) – stride of the first block in this stage

  • dropRate (float) – dropout probability

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
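Only the first block in a stage applies the stride (and the channel change); the remaining nb_layers - 1 blocks run at stride 1 so the feature map keeps its shape. A sketch of how a stage typically lays out its per-block strides (an assumption matching the standard WRN pattern, not this package's exact code):

```python
def stage_strides(nb_layers: int, stride: int) -> list:
    """Per-block strides for one stage: the first block downsamples, the rest keep shape."""
    return [stride] + [1] * (nb_layers - 1)


print(stage_strides(4, 2))  # [2, 1, 1, 1]
```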
class models.wide_resnet.WideResNet(depth: int = 22, widen_factor: int = 2, num_classes: int = 10, dropRate: float = 0.3, small_dense_density: float = 1.0)

Bases: torch.nn.modules.module.Module

Wide Residual Network with varying depth and width.

For more info, see the paper: Wide Residual Networks by Sergey Zagoruyko, Nikos Komodakis https://arxiv.org/abs/1605.07146

Parameters
  • depth (int) – number of layers

  • widen_factor (int) – factor by which channel widths are multiplied

  • num_classes (int) – number of output classes

  • dropRate (float) – dropout probability

  • small_dense_density (float) – equivalent parameter density of the Small-Dense model

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
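For a WRN built from basic blocks, valid depths satisfy depth = 6n + 4, where n is the number of blocks per stage, and widen_factor multiplies the base stage widths 16/32/64. The bookkeeping for the class defaults, WRN-22-2 (standard WRN arithmetic, assumed to apply to this implementation):

```python
def wrn_config(depth: int, widen_factor: int):
    """Blocks per stage and channel widths for a Wide ResNet of the given depth/width."""
    # Standard WRN constraint: depth = 6n + 4, with n blocks per stage.
    assert (depth - 4) % 6 == 0, "depth must be of the form 6n + 4"
    n = (depth - 4) // 6
    # Stem width 16, then three widened stages.
    widths = [16, 16 * widen_factor, 32 * widen_factor, 64 * widen_factor]
    return n, widths


# The class defaults (depth=22, widen_factor=2) give WRN-22-2:
n, widths = wrn_config(22, 2)
print(n, widths)  # 3 [16, 32, 64, 128]
```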