Normalising a bank of convolutional filters

Hello everyone,
I’m new to PyTorch…
I’m trying to normalize a bank of convolutional filters by their L2-norm (treating each filter as a single vector), as in https://arxiv.org/abs/0706.3177.
Basically, dividing every filter matrix by a scalar.
This is my code:

import torch
import torch.nn as nn


class mymodel(nn.Module):

    def __init__(self):
        super(mymodel, self).__init__()
        self.N = 64    # number of filters
        self.C_in = 1  # input channels
        self.W = 16    # filter width
        self.H = 16    # filter height

        self.filters = nn.Parameter(torch.randn(self.N, self.C_in, self.W, self.H), requires_grad=True)


    def normalize_filters(self):

        # sum of squares over both spatial dims, then square root: one L2-norm per filter
        norm = self.filters.data.pow(2).sum(2).sum(2).pow(1 / 2)
        self.filters.data = self.filters.data.div(norm.expand_as(self.filters.data))

    def forward(self, *input):

        self.normalize_filters()
        # and following stuff

But when I call normalize_filters(), I get this error:

RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 1.
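
If it helps, here is a quick shape check I ran outside the model (same tensor sizes; this is just my own experiment, not the real code):

    import torch

    filters = torch.randn(64, 1, 16, 16)            # (N, C_in, W, H)
    norm = filters.pow(2).sum(2).sum(2).pow(1 / 2)  # sums away both spatial dims
    print(norm.shape)                               # torch.Size([64, 1])
    # norm.expand_as(filters) then fails for me with a size-mismatch RuntimeError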

I don’t know if there’s a better solution; I think torch.norm only works for vectors…
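
One workaround I’m considering is to flatten every filter into a row vector, take the norm per row, and reshape back. This is only a sketch (the helper name normalize_filters_flat is mine, and I’ve only sanity-checked the shapes):

    import torch

    def normalize_filters_flat(filters):
        # filters: (N, C_in, W, H) -> one L2-norm per filter
        flat = filters.view(filters.size(0), -1)    # (N, C_in*W*H)
        norm = flat.norm(p=2, dim=1, keepdim=True)  # (N, 1)
        return (flat / norm).view_as(filters)       # broadcast divide, reshape back

If that seems reasonable, I would call it on self.filters.data inside normalize_filters().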