VGG-16 pretrained network with half the channel size

Hello,

Is it possible to load a pretrained VGG-16 model but use a smaller network in terms of width, i.e. with half the number of channels of the original VGG network? For example, the VGG-16 feature extractor looks like this:

(0): Conv2d(3, **64**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(**64**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(**64**, **64**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(**64**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(7): Conv2d(**64**, **128**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): BatchNorm2d(**128**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(9): ReLU(inplace)
(10): Conv2d(**128**, **128**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): BatchNorm2d(**128**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(12): ReLU(inplace)
(13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(14): Conv2d(**128**, **256**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): BatchNorm2d(**256**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(16): ReLU(inplace)

I would like the pretrained network to have its channels halved, as follows:

(0): Conv2d(3, **32**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(**32**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(**32**, **32**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(**32**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(7): Conv2d(**32**, **64**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): BatchNorm2d(**64**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(9): ReLU(inplace)
(10): Conv2d(**64**, **64**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): BatchNorm2d(**64**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(12): ReLU(inplace)
(13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(14): Conv2d(**64**, **128**, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): BatchNorm2d(**128**, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(16): ReLU(inplace)

Any suggestions would be appreciated.
Thank you in advance!

You could reduce the number of channels manually using the pretrained model, but you would need to come up with a strategy for how these filters should be reduced.
Each weight tensor (the kernels or filters) in an nn.Conv2d layer has 4 dimensions, defined as [out_channels=num_filters, in_channels, height, width].
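As a quick sanity check of that layout (a minimal standalone example, not the full VGG model):

```python
import torch.nn as nn

# Same configuration as the first VGG-16 conv layer: 3 input channels, 64 filters
conv = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)

print(conv.weight.shape)  # torch.Size([64, 3, 3, 3]) -> [out_channels, in_channels, height, width]
print(conv.bias.shape)    # torch.Size([64]) -> one bias value per filter
```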
While you could reduce the number of filters in e.g. the first layer by keeping only the first 32 filters, the filters of the following layer would then also need a smaller in_channels dimension. You could of course slice that dimension as well, but there might be other (better) ways to select which filters to keep, e.g. ranking them by their L1 norm as done in magnitude-based pruning.
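A minimal sketch of this slicing approach, shown on standalone layers rather than the full torchvision model (keeping the *first* 32 filters is an arbitrary strategy used here only for illustration; note that BatchNorm parameters and running statistics need the same slicing):

```python
import torch
import torch.nn as nn

# Full-width layers, standing in for the first block of a pretrained VGG16-BN
full_conv0 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
full_bn0 = nn.BatchNorm2d(64)
full_conv1 = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Half-width counterparts
half_conv0 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
half_bn0 = nn.BatchNorm2d(32)
half_conv1 = nn.Conv2d(32, 32, kernel_size=3, padding=1)

with torch.no_grad():
    # First layer: slice only the out_channels dimension
    half_conv0.weight.copy_(full_conv0.weight[:32])
    half_conv0.bias.copy_(full_conv0.bias[:32])

    # BatchNorm: slice affine parameters and running statistics alike
    half_bn0.weight.copy_(full_bn0.weight[:32])
    half_bn0.bias.copy_(full_bn0.bias[:32])
    half_bn0.running_mean.copy_(full_bn0.running_mean[:32])
    half_bn0.running_var.copy_(full_bn0.running_var[:32])

    # Following layer: slice out_channels AND in_channels,
    # since its inputs now only have 32 channels
    half_conv1.weight.copy_(full_conv1.weight[:32, :32])
    half_conv1.bias.copy_(full_conv1.bias[:32])

x = torch.randn(1, 3, 224, 224)
out = half_conv1(half_bn0(half_conv0(x)))
print(out.shape)  # torch.Size([1, 32, 224, 224])
```

The same pattern could be looped over `model.features` of a real pretrained `torchvision.models.vgg16_bn`, slicing each conv/bn pair in turn; the classifier's first linear layer would also need its input features halved accordingly.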