Is torchvision's mobilenet_v3_large model different from the MobileNetV3-Large in the original paper?

  • To me, it seems like the 1x1 convolutions in the paper are replaced with fully connected layers in torchvision’s mobilenet_v3_large.
  • Below is the model structure of torchvision’s pretrained mobilenet_v3_large:
from torchvision import models
model = models.mobilenet.mobilenet_v3_large(pretrained=True)
print(model.__dict__)  # prints the module's attribute dict, which is the output shown below
{'training': True, '_parameters': OrderedDict(), '_buffers': OrderedDict(), '_non_persistent_buffers_set': set(), '_backward_hooks': OrderedDict(), '_is_full_backward_hook': None, '_forward_hooks': OrderedDict(), '_forward_pre_hooks': OrderedDict(), '_state_dict_hooks': OrderedDict(), '_load_state_dict_pre_hooks': OrderedDict(), '_modules': OrderedDict([('features', Sequential(
  (0): ConvBNActivation(
    (0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
    (2): Hardswish()
  )
  (1): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False)
        (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (2): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
        (1): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (2): ConvBNActivation(
        (0): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(24, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (3): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=72, bias=False)
        (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (2): ConvBNActivation(
        (0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(24, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (4): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False)
        (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (5): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False)
        (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (6): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (1): ConvBNActivation(
        (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False)
        (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (7): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(240, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=240, bias=False)
        (1): BatchNorm2d(240, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): ConvBNActivation(
        (0): Conv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (8): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(80, 200, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(200, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(200, 200, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=200, bias=False)
        (1): BatchNorm2d(200, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): ConvBNActivation(
        (0): Conv2d(200, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (9): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
        (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): ConvBNActivation(
        (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (10): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
        (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): ConvBNActivation(
        (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (11): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(480, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
        (1): BatchNorm2d(480, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(480, 120, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(120, 480, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(112, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (12): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(672, 672, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=672, bias=False)
        (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(112, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (13): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=672, bias=False)
        (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(672, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (14): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False)
        (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (15): InvertedResidual(
    (block): Sequential(
      (0): ConvBNActivation(
        (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (1): ConvBNActivation(
        (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False)
        (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Hardswish()
      )
      (2): SqueezeExcitation(
        (fc1): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1))
        (relu): ReLU(inplace=True)
        (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1))
      )
      (3): ConvBNActivation(
        (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): Identity()
      )
    )
  )
  (16): ConvBNActivation(
    (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
    (2): Hardswish()
  )
)), ('avgpool', AdaptiveAvgPool2d(output_size=1)), ('classifier', Sequential(
  (0): Linear(in_features=960, out_features=1280, bias=True)
  (1): Hardswish()
  (2): Dropout(p=0.2, inplace=True)
  (3): Linear(in_features=1280, out_features=1000, bias=True)
))])}
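
For reference, the head can also be inspected directly through the avgpool and classifier attributes that appear in the dump above:

print(model.avgpool)     # AdaptiveAvgPool2d(output_size=1)
print(model.classifier)  # Sequential of Linear(960 -> 1280), Hardswish, Dropout, Linear(1280 -> 1000)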

MobileNetV3-Large in the original paper

[Screenshot of the MobileNetV3-Large specification table from the paper, including the final conv2d 1x1, NBN layers]

  • However, the conv2d 1x1, NBN layer from the paper with 960 input channels and 1280 output channels appears to be missing.
  • The conv2d 1x1, NBN layer with 1280 input channels and num_classes output channels also appears to be missing…
  • Instead, in place of those convolutional layers, there are Linear(in_features=960, out_features=1280, bias=True) and Linear(in_features=1280, out_features=1000, bias=True).

If that is the case, what is the purpose of using fully connected layers instead of 1x1 convolutions?

You can replace a conv layer with a kernel size of 1x1 with a linear layer: after the adaptive average pool the feature map is 1x1 spatially, so both operations compute the same weighted sum over channels and should yield the same results.
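
Below is a minimal sketch (not from the original thread) that checks this equivalence: on a 1x1 feature map, such as the output of AdaptiveAvgPool2d(output_size=1), a 1x1 Conv2d and a Linear layer given the same weights produce identical results.

import torch
import torch.nn as nn

x = torch.randn(1, 960, 1, 1)  # shape of the feature map after the global average pool

conv = nn.Conv2d(960, 1280, kernel_size=1, bias=True)
fc = nn.Linear(960, 1280, bias=True)

# Copy the conv weights into the linear layer so both use identical parameters.
with torch.no_grad():
    fc.weight.copy_(conv.weight.view(1280, 960))
    fc.bias.copy_(conv.bias)

out_conv = conv(x).flatten(1)   # (1, 1280)
out_fc = fc(x.flatten(1))       # (1, 1280)

print(torch.allclose(out_conv, out_fc, atol=1e-6))  # True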

I see! Thank you for the clarification :hugs: