How to modify filter gradients using register_hook, with a separate update for each filter's gradient

I am wondering how I can use register_hook to modify the filter gradients of a convolutional layer.

Say my conv layer has 10 filters, 3 channels, 5x5 --> (10, 3, 5, 5).
Each 5x5 filter channel should have its own gradient, e.g. G.
I have a corresponding tensor of size (10, 3, 5, 5), where each 5x5 matrix M is a matrix that I want to multiply with that filter channel's gradient, e.g. MxG, and use the result as the gradient. Note that M is different for each filter.

For a fully connected layer I simply did:
model.layer.weight.register_hook(lambda grad: torch.t(torch.mm(M, torch.t(grad))))

How do I do it for convolutional filters?
Will lambda grad iterate over the (3, 5, 5) tensors? How do I update my gradient in this case…?

If you already have a tensor of shape [10, 3, 5, 5], where each 5x5 slice corresponds to a mask, you can simply multiply the gradient with this tensor. There is no need to loop over each mask.
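To make this concrete, here is a minimal sketch (the layer sizes, the random M tensor, and the `hook` helper are assumptions for illustration). For an elementwise mask you can return `grad * mask` directly; for the per-channel matrix product MxG from the question, `torch.matmul` batches over the leading dimensions, so no loop over filters is needed either way:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical conv layer matching the shapes in the question:
# 10 filters, 3 input channels, 5x5 kernels -> weight shape (10, 3, 5, 5).
conv = nn.Conv2d(3, 10, kernel_size=5, bias=False)

# Hypothetical stack of per-filter, per-channel 5x5 matrices M,
# with the same shape as the weight tensor.
M = torch.randn(10, 3, 5, 5)

raw = {}  # capture the unmodified gradient, only to inspect it afterwards

def hook(grad):
    raw["grad"] = grad.clone()
    # Batched matrix multiply: computes M[i, c] @ grad[i, c]
    # for every filter i and channel c at once.
    return torch.matmul(M, grad)

conv.weight.register_hook(hook)

# Dummy forward/backward pass to trigger the hook.
x = torch.randn(1, 3, 8, 8)
conv(x).sum().backward()

# conv.weight.grad now holds the per-channel products M[i, c] @ G[i, c].
```

If instead you want an elementwise mask (as in the reply above), replace the return line with `return grad * M`; broadcasting handles the rest since the shapes already match.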
