Changing layer size without declaring it again

I am working on a project where I need to remove some of the filters of a layer. One can do this by reinitializing the layer, but that would require setting the parameters correctly for each type of layer. I am therefore looking for a generic way to do it. I came up with the following solution.

import torch
from torch.autograd import Variable
from torch.nn import Parameter

model = torch.load('3of8.mod')
print(model.conv1) #Conv2d (1, 8, kernel_size=(5, 5), stride=(1, 1))
print(model.conv2) #Conv2d (8, 16, kernel_size=(5, 5), stride=(1, 1))
print(model(Variable(torch.rand(1,1,28,28))).size()) #torch.Size([1, 10])

# keep only filters 0, 2 and 7 of conv1, shrinking it to 3 output channels
model.conv1.weight = Parameter(model.conv1.weight.data[(0,2,7),])
model.conv1.bias = Parameter(model.conv1.bias.data[(0,2,7),])
model.conv1.out_channels = 3
# slice conv2's input channels to match the pruned conv1 output
model.conv2.weight = Parameter(model.conv2.weight.data[:,(0,2,7),:,:])
model.conv2.in_channels = 3

print(model.conv1) #Conv2d (1, 3, kernel_size=(5, 5), stride=(1, 1))
print(model.conv2) #Conv2d (3, 16, kernel_size=(5, 5), stride=(1, 1))

print(model(Variable(torch.rand(1,1,28,28))).size()) #torch.Size([1, 10])
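The same slicing could be wrapped in a small helper so it works for any pair of consecutive Conv2d layers; here is a minimal sketch (the helper name prune_conv_pair is mine, and indexing tensors with a list assumes a reasonably recent PyTorch):

from torch.nn import Parameter

def prune_conv_pair(conv_a, conv_b, keep):
    # keep only the output channels listed in `keep` of conv_a
    conv_a.weight = Parameter(conv_a.weight.data[keep])
    if conv_a.bias is not None:
        conv_a.bias = Parameter(conv_a.bias.data[keep])
    conv_a.out_channels = len(keep)
    # ...and the matching input channels of conv_b
    conv_b.weight = Parameter(conv_b.weight.data[:, keep])
    conv_b.in_channels = len(keep)

prune_conv_pair(model.conv1, model.conv2, [0, 2, 7])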

Is there anything wrong with this approach? Does anyone know a better solution? Thanks

If all the Conv2d attributes are set correctly, this will work fine. However, when you need this kind of flexibility, I recommend using the functional form, where you pass the weight and bias tensors in manually (http://pytorch.org/docs/master/nn.html#torch.nn.functional.conv2d).
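For example, a minimal sketch of that functional approach (the weight and bias tensors here are random stand-ins for the stored conv1 parameters, and the kept filter indices match the example above):

import torch
import torch.nn.functional as F

# stand-ins for the original conv1 parameters (8 output channels)
weight = torch.randn(8, 1, 5, 5)
bias = torch.randn(8)

keep = [0, 2, 7]  # filters to retain

x = torch.rand(1, 1, 28, 28)
# apply the convolution with the sliced parameters directly;
# no layer attributes need to be patched
out = F.conv2d(x, weight[keep], bias[keep], stride=1)
print(out.size())  # torch.Size([1, 3, 24, 24])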
