Normalizing weights of convolution filters


I would like to normalize (L1 norm) the weights of my convolutional filters. Is it possible to do this for torch.nn.Conv2d? If so, do you have some hints on the path I should take?

Thank you


It might be as simple as this:

import torch
import torch.nn as nn

class NormedConv(nn.Conv2d):

    def normalize(self):
        # Rescale each filter in-place; torch.no_grad() keeps the update
        # out of the autograd graph, so self.weight remains a leaf tensor.
        with torch.no_grad():
            for f in range(self.weight.shape[0]):
                s = self.weight[f].abs().sum()  # L1 norm of filter f
                self.weight[f] /= s
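As a sketch of how this could be used in practice (repeating the class here so the snippet is self-contained; the layer sizes and optimizer are arbitrary choices for illustration), you would call normalize() after each optimizer step, outside the backward pass:

```python
import torch
import torch.nn as nn

class NormedConv(nn.Conv2d):
    def normalize(self):
        # In-place rescale under no_grad, so self.weight stays a leaf tensor.
        with torch.no_grad():
            for f in range(self.weight.shape[0]):
                self.weight[f] /= self.weight[f].abs().sum()

conv = NormedConv(3, 4, kernel_size=3)
opt = torch.optim.SGD(conv.parameters(), lr=0.1)

x = torch.randn(2, 3, 8, 8)
loss = conv(x).pow(2).mean()
loss.backward()
opt.step()

conv.normalize()  # each filter now has unit L1 norm
```

Since the optimizer step changes the weights, the rescaling has to be repeated after every step if you want the constraint to hold throughout training.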

I have tried a similar approach:

for g in range(self.weight.shape[0]):
    for k in range(self.weight.shape[1]):
        mid_x, mid_y = self.weight.shape[3] // 2, self.weight.shape[2] // 2
        s = torch.sum(self.weight[g, k])
        self.weight[g, k, mid_y, mid_x] = self.weight[g, k, mid_y, mid_x] - s

When I start the training loop, I receive this error:

File "train/", line 127, in <module>
    opt = torch.optim.Adadelta(model.parameters(), weight_decay=1e-6)
File "/home/mohamed/anaconda3/envs/cvPyt/lib/python3.6/site-packages/torch/optim/", line 36, in __init__
    super(Adadelta, self).__init__(params, defaults)
File "/home/mohamed/anaconda3/envs/cvPyt/lib/python3.6/site-packages/torch/optim/", line 51, in __init__
File "/home/mohamed/anaconda3/envs/cvPyt/lib/python3.6/site-packages/torch/optim/", line 202, in add_param_group
    raise ValueError("can't optimize a non-leaf Tensor")
ValueError: can't optimize a non-leaf Tensor

Any help?
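A likely cause, as a sketch: the loop above modifies self.weight while autograd is recording, which leaves the parameter as a non-leaf tensor, and the optimizer then refuses it. Wrapping the update in torch.no_grad(), as in the earlier answer, keeps the weight a leaf. The standalone conv layer below is a hypothetical stand-in for the model's layer, just to illustrate the pattern:

```python
import torch
import torch.nn as nn

# Stand-in layer; shapes are arbitrary for illustration.
conv = nn.Conv2d(3, 8, 3)

# The no_grad block keeps the in-place update out of the autograd
# graph, so conv.weight remains a leaf tensor.
with torch.no_grad():
    for g in range(conv.weight.shape[0]):
        for k in range(conv.weight.shape[1]):
            mid_y = conv.weight.shape[2] // 2
            mid_x = conv.weight.shape[3] // 2
            s = conv.weight[g, k].sum()
            conv.weight[g, k, mid_y, mid_x] -= s

# conv.weight is still a leaf parameter, so this no longer raises.
opt = torch.optim.Adadelta(conv.parameters(), weight_decay=1e-6)
```

If the update runs inside forward() or between backward() and step(), the same no_grad wrapping applies there too.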