Storing Convolutional Weights After Pruning

Hi,
I have pruned convolutional weights group-wise, as in CondenseNet. Right now, the convolutional weights in each layer of my model look like [[0.5, 0, 0, 0], [0, 0.7, 0, 0], [0, 0, 0.25, 0], [0, 0, 0, 0.8]] (groups = 4 is applied). I want to store the grouped convolution weights as [[0.5, 0.7, 0.25, 0.8]] in the model's state_dict(), so that I can load these weights back into the normal convolutional model architecture with groups=4 to validate my pruned model. How do I implement this storage operation on all convolutional layers of the model if I am using nn.Sequential to store the features?

Hi,

If you use a PyTorch convolution with the groups option, then the weights will already be stored in the way you want.
If your original convolution was dense, I would say you can create a new grouped convolution, copy the weights into it, and replace the dense convolution with the grouped one in your model. Then the regular saving/forwarding will work with groups out of the box.
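For a 1x1 convolution, the idea could be sketched like this (my channel sizes here are just illustrative, and it assumes the pruned dense weight is exactly block-diagonal across the groups):

```python
import torch
import torch.nn as nn

groups = 4
# hypothetical sizes: a pruned dense 1x1 conv with 8 input and 8 output channels
dense = nn.Conv2d(8, 8, kernel_size=1, bias=False)
grouped = nn.Conv2d(8, 8, kernel_size=1, groups=groups, bias=False)

out_g, in_g = 8 // groups, 8 // groups
with torch.no_grad():
    for g in range(groups):
        # keep only the g-th diagonal block of the (assumed block-diagonal) dense weight
        grouped.weight[g * out_g:(g + 1) * out_g].copy_(
            dense.weight[g * out_g:(g + 1) * out_g, g * in_g:(g + 1) * in_g]
        )

# then swap the grouped conv into the model where the dense one was,
# e.g. model.features[3] = grouped
```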

Hi alban, I did not understand "If your original convolution was dense, I would say you can create a new grouped convolution, copy the weights into it, and replace the dense convolution with the grouped one in your model".

I will try to give an example of what I am trying to do. Let's say I am training the dense-convolution MobileNet model:

“”"
class mobilenet(nn.Module):
def init(self, num_classes=10, _groups=1):
super(mobilenet, self).init()

    def conv_bn(inp, oup, stride):
        return nn.Sequential(
            nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
            nn.BatchNorm2d(oup),
            nn.ReLU(inplace=True)
        )

    def conv_dw(inp, oup, stride, _groups=2):
        return nn.Sequential(
            nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
            nn.BatchNorm2d(inp),
            nn.ReLU(inplace=True),

            nn.Conv2d(inp, oup, 1, 1, 0, groups=_groups, bias=False),
            nn.BatchNorm2d(oup),
            nn.ReLU(inplace=True),
        )

    self.model = nn.Sequential(
        conv_bn(  3,  32, 1), 
        conv_dw( 32,  64, 1, _groups),
        conv_dw( 64, 128, 1, _groups),
        conv_dw(128, 128, 1, _groups),
        conv_dw(128, 256, 2, _groups),
        conv_dw(256, 256, 1, _groups),
        conv_dw(256, 512, 2, _groups),
        conv_dw(512, 512, 1, _groups),
        conv_dw(512, 512, 1, _groups),
        conv_dw(512, 512, 1, _groups),
        conv_dw(512, 512, 1, _groups),
        conv_dw(512, 512, 1, _groups),
        conv_dw(512, 1024, 2, _groups),
        conv_dw(1024, 1024, 1, _groups),
        nn.AvgPool2d(4),
    )
    self.fc = nn.Linear(1024, num_classes)

def forward(self, x):
    x = self.model(x)
    #print("The shape of the intermediate feature is:", x.shape)
    x = x.view(-1, 1024)
    x = self.fc(x)
    #print("Returning the output size:", x.size())
    return x

“”"

I have done group pruning like CondenseNet and stored all the convolutional weights. Let's say groups=4, so due to pruning my initial convolutional weights [[0.5, 0.7, 0.8, 0.3], […], […], […]] have become [[0.8, 0, 0, 0], [0, 0.9, 0, 0], [0, 0, 0.3, 0], [0, 0, 0, 0.5]].

Let's say I have a function which takes cnn.weight.data and groups as input and returns cnn_group.weight.data as output:

[[0.8, 0, 0, 0], [0, 0.9, 0, 0], [0, 0, 0.3, 0], [0, 0, 0, 0.5]] --> [0.8, 0.9, 0.3, 0.5]
“”"

I have to run this function across all convolutional layers, store the weights of each layer, and load those weights into the MobileNet model built with groups=2.
I stored the weights like this:

```python
_dict = {}
for i, m in enumerate(model.modules()):
    if isinstance(m, nn.Conv2d):
        _dict[i] = convert_wt(m.weight.data, groups)
```

How do I load this dict back into each convolutional layer?
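In other words, I want something like this to work (a rough sketch of my intent; dense_model and grouped_model stand for the pruned dense model and a fresh model built with groups=2):

```python
import torch
import torch.nn as nn

groups = 2  # the group count I want in the deployed model

# collect converted weights from the pruned dense model, keyed by module name
converted = {}
for name, m in dense_model.named_modules():
    if isinstance(m, nn.Conv2d) and m.groups == 1 and m.kernel_size == (1, 1):
        # only the pointwise convs were pruned block-diagonally
        converted[name] = convert_wt(m.weight.data, groups)

# copy each entry into the matching layer of the grouped model
with torch.no_grad():
    for name, m in grouped_model.named_modules():
        if name in converted:
            m.weight.copy_(converted[name])
```

The remaining layers (the first dense conv, the depthwise convs, batchnorms, and fc) would be copied over unchanged in the same way.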

Which convolutions from your model do you want to modify?
The ones declared as nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False) already work with one group per input channel.
I guess it is the ones defined as nn.Conv2d(inp, oup, 1, 1, 0, groups=_groups, bias=False) that you want to compress, since in your code _groups=1? If so, after compression, you could replace this conv with a new one with parameters nn.Conv2d(inp, oup, 1, 1, 0, groups=nb_groups, bias=False) and copy the weights into it. Then replace it in the Sequential, for example for the first one by doing self.model[1][3] = your_new_conv.
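Concretely, for the first conv_dw block that could look like this (a sketch; convert_wt is your conversion function, net is an instance of your mobilenet, and nb_groups is whatever group count you pruned with):

```python
import torch.nn as nn

nb_groups = 2
old_conv = net.model[1][3]  # the pointwise nn.Conv2d(32, 64, 1, 1, 0, groups=1)
new_conv = nn.Conv2d(32, 64, 1, 1, 0, groups=nb_groups, bias=False)
new_conv.weight.data.copy_(convert_wt(old_conv.weight.data, nb_groups))
net.model[1][3] = new_conv
```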

Hi albanD,
Thanks for the info. I tried to copy the weights, but the weights are not getting copied to the second model. Here is my loop:

```python
for m, n in zip(Model.modules(), Net.modules()):
    if isinstance(m, pointwise_groupConv2d):
        # since both Model and Net have the same layers
        attrs = vars(m)
        attrs1 = vars(n)
        #print("Convolutional weights attributes", m, n, m.weight.data.size(), n.weight.data.size(), attrs.keys(), attrs1.keys())
        #print(m.weight.data)
        n.weight.data = Remove_grouped_wt(m.weight.data, groups=2)
    else:
        #print("Harsha A ", m, n)
        if isinstance(m, nn.Conv2d):
            # copy the convolutional layer weights into the Net model
            n.weight.data = m.weight.data
            #n.bias.data = m.bias.data

        elif isinstance(m, nn.BatchNorm2d):
            # copy the BatchNorm layer weights and bias
            n.weight.data = m.weight.data
            n.bias.data = m.bias.data

        elif isinstance(m, nn.Linear):
            # copy the Linear layer weights and bias
            n.weight.data = m.weight.data
            n.bias.data = m.bias.data
```
What is the mistake I am making?

You should use the copy_ method: n.weight.data.copy_(m.weight.data).
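Applied to your loop, that would look something like this (a sketch; pointwise_groupConv2d and Remove_grouped_wt are your own class and function, and no_grad keeps autograd from tracking the copies):

```python
import torch
import torch.nn as nn

with torch.no_grad():
    for m, n in zip(Model.modules(), Net.modules()):
        if isinstance(m, pointwise_groupConv2d):
            # copy_ writes into the existing tensor instead of rebinding .data
            n.weight.copy_(Remove_grouped_wt(m.weight.data, groups=2))
        elif isinstance(m, nn.Conv2d):
            n.weight.copy_(m.weight)
        elif isinstance(m, nn.BatchNorm2d):
            n.weight.copy_(m.weight)
            n.bias.copy_(m.bias)
        elif isinstance(m, nn.Linear):
            n.weight.copy_(m.weight)
            n.bias.copy_(m.bias)
```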