Hi,
Suppose that after feeding a 224x224x3 image into a backbone network, the output of the last conv layer is 7x7x1280, and I have a well-trained two-layer MLP classifier with a 1280x100 weight matrix followed by a 100x6 weight matrix. How can I convert these two weight matrices so the classifier runs fully convolutionally on the 7x7x1280 feature map, producing a 7x7x100 map and then a 7x7x6 map? I know I have to reshape the two weight matrices into 1x1 convolutional kernels, but I don't know how.
Should it be something like this?
self.conv1 = nn.Conv2d(1280, 100, kernel_size=1, stride=1, padding=0)
self.conv1.weight.data = # a 100x1280x1x1 tensor? (Conv2d weights are out_channels x in_channels x kH x kW)
self.conv2 = nn.Conv2d(100, 6, kernel_size=1, stride=1, padding=0)
self.conv2.weight.data = # a 6x100x1x1 tensor?
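For context, here is a minimal sketch of the conversion I have in mind, assuming the trained MLP is two `nn.Linear` layers (`fc1 = Linear(1280, 100)`, `fc2 = Linear(100, 6)` are my placeholder names). Since `nn.Linear` stores its weight as `(out_features, in_features)` and `nn.Conv2d` stores its weight as `(out_channels, in_channels, kH, kW)`, the reshape would be `(out, in) -> (out, in, 1, 1)`:

```python
import torch
import torch.nn as nn

# Placeholder "well-trained" MLP head; in practice these would be loaded weights.
fc1 = nn.Linear(1280, 100)
fc2 = nn.Linear(100, 6)

# Equivalent 1x1 convolutions over the 7x7x1280 feature map.
conv1 = nn.Conv2d(1280, 100, kernel_size=1, stride=1, padding=0)
conv2 = nn.Conv2d(100, 6, kernel_size=1, stride=1, padding=0)

with torch.no_grad():
    # Linear weight (out, in) -> Conv2d weight (out, in, 1, 1); biases copy over directly.
    conv1.weight.copy_(fc1.weight.view(100, 1280, 1, 1))
    conv1.bias.copy_(fc1.bias)
    conv2.weight.copy_(fc2.weight.view(6, 100, 1, 1))
    conv2.bias.copy_(fc2.bias)

# Sanity check: applying the convs to every spatial position should match
# applying the MLP to each 1280-dim pixel vector independently.
x = torch.randn(1, 1280, 7, 7)
out_conv = conv2(torch.relu(conv1(x)))  # shape (1, 6, 7, 7)
out_lin = fc2(torch.relu(fc1(x.permute(0, 2, 3, 1)))).permute(0, 3, 1, 2)
print(torch.allclose(out_conv, out_lin, atol=1e-5))
```

Is this the right way to do it, or is there a built-in way to load `state_dict` weights across the two layer types?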