Normalising a bank of convolutional filters

Hello everyone,
I’m new to PyTorch…
I’m trying to normalize a bank of convolutional filters by their L2 norms, as if every filter were a vector: basically dividing every filter matrix by a constant.
This is my code:

import torch
import torch.nn as nn

class mymodel(nn.Module):

    def __init__(self):
        super(mymodel, self).__init__()
        self.N = 64       # number of filters (example value)
        self.C_in = 64    # input channels (example value)
        self.W = 16
        self.H = 16

        self.filters = nn.Parameter(torch.randn(self.N, self.C_in, self.W, self.H), requires_grad=True)

    def normalize_filters(self):
        # my attempt at dividing each filter by its L2 norm
        # (this is where the error below is raised)
        ...

    def forward(self, *input):
        # and following stuff
        ...

But when I call normalize_filters() I get the error:

RuntimeError: The expanded size of the tensor (1) must match the existing size (64) at non-singleton dimension 1.

I don’t know if there’s a better solution; I think torch.norm only works on vectors…
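For what it’s worth, here is a minimal sketch of what I’m trying to achieve on a plain tensor (the shapes are placeholders matching my model). Flattening each filter into a vector before calling norm, then reshaping the norms so they broadcast, seems to avoid the shape mismatch:

```python
import torch

# placeholder shapes: 64 filters, 64 input channels, 16x16 kernels
filters = torch.randn(64, 64, 16, 16)

# one L2 norm per filter: flatten everything after dim 0, take the norm along dim 1,
# then reshape to (N, 1, 1, 1) so it broadcasts against the (N, C_in, W, H) tensor
norms = filters.flatten(1).norm(dim=1).view(-1, 1, 1, 1)

# every filter is divided by its own scalar norm
normalized = filters / norms
```

Is this the idiomatic way to do it, or is there something built in that I’m missing?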