Manually setting out_features and in_features in fully connected layers

I recently learned about this functionality. For example, take a VGG16 model:

import torchvision.models as models
model = models.vgg16()
model._modules['classifier'][6] = 1
Sequential(
  (0): Linear(in_features=25088, out_features=4096, bias=True)
  (1): ReLU(inplace)
  (2): Dropout(p=0.5)
  (3): Linear(in_features=4096, out_features=4096, bias=True)
  (4): ReLU(inplace)
  (5): Dropout(p=0.5)
  (6): Linear(in_features=4096, out_features=1, bias=True)
)

It turns out you can manipulate the number of input and output features, e.g.

model._modules['classifier'][6].in_features = 5
model._modules['classifier'][6].out_features = 1

  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace)
    (5): Dropout(p=0.5)
    (6): Linear(in_features=5, out_features=1, bias=True)
  )

I don't quite understand how this works. If I reduce the 1000 output neurons to 1, which one is kept? And what happens to all the weights? I realize that setting in_features=5 couldn't work here, but out_features=1 seemingly could. But how?

Your code shouldn't work, as you are trying to assign an int as a child module.
Anyway, changing out_features won't have any effect after the layer has been initialized:

import torch
import torchvision.models as models

model = models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

out1 = model(x)

model.fc.out_features = 1

out2 = model(x)

print(out1.shape, out2.shape)
> torch.Size([1, 1000]) torch.Size([1, 1000])
print((out1 == out2).all())
> tensor(True)
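
You can also verify that only the attribute changed while the weight parameter kept its original shape, which is why the output is unchanged (continuing the snippet above):

print(model.fc.out_features)
> 1
print(model.fc.weight.shape)
> torch.Size([1000, 2048])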

It's a typo, I meant of course

model._modules['classifier'][6].out_features = 1

So can I change the number of out_features or in_features and resize the weight matrix accordingly?

You could reassign a new weight matrix with your desired shape.
Changing the in_features and out_features attributes will not change the underlying weight parameter.
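
For example, here is a minimal sketch (assuming the VGG16 model from above) that reassigns freshly initialized weight and bias parameters with the desired shape; the old values are discarded in both options shown:

import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg16()
fc = model.classifier[6]

# Option 1: reassign new weight and bias parameters with the desired shape
# (randomly initialized here; the previous values are discarded)
fc.weight = nn.Parameter(torch.randn(1, fc.in_features) * 0.01)
fc.bias = nn.Parameter(torch.zeros(1))
fc.out_features = 1  # keep the metadata in sync with the new weight shape

# Option 2 (more common): simply replace the whole layer
# model.classifier[6] = nn.Linear(4096, 1)

x = torch.randn(1, 3, 224, 224)
print(model(x).shape)
> torch.Size([1, 1])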