Modifying convolution weights in the forward pass

I would like to modify the weights of a convolution layer before applying the convolution to the input. For example:

conv = torch.nn.Conv2d(in_ch, out_ch, 3, 1, 1)
conv.weight.data = conv.weight * values
out = conv(x)

In the code above, ‘x’ is the input and the convolutional weights are modified using the ‘.data’ attribute. ‘values’ is a tensor of shape kernel_size x kernel_size, which in the example above is 3 x 3. I’ve read from multiple sources that using the ‘.data’ attribute is deprecated and shouldn’t be used, since it can lead to incorrect gradient calculations. I would like to know the right way to modify the convolutional weights.

Instead of manipulating the .data attribute, wrap the weight manipulation in a torch.no_grad() context to explicitly disable Autograd for this operation.
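
For reference, a minimal sketch of that suggestion; the channel sizes, the random ‘values’, and the dummy input below are made-up placeholders, not part of the original question:

import torch

in_ch, out_ch = 3, 8                      # assumed channel sizes for illustration
conv = torch.nn.Conv2d(in_ch, out_ch, 3, 1, 1)
values = torch.randn(3, 3)                # same shape as the kernel, as in the question
x = torch.randn(1, in_ch, 32, 32)         # dummy input

with torch.no_grad():                     # disable Autograd while mutating the parameter
    conv.weight.mul_(values)              # in-place scaling, broadcast over the out_ch x in_ch dims

out = conv(x)                             # the forward pass still tracks gradients for conv.weight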


Thank you for your reply @ptrblck. What if ‘values’ is being generated by another sub-network, so gradients also need to flow through ‘values’? Will torch.no_grad() still work in that case?

No, it won’t work. In this case you might want to use the functional API via:

out = F.conv2d(x, conv.weight*values, ...)

instead of manipulating the weight of the conv layer.
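
A minimal sketch of that functional approach, assuming ‘values’ comes from some sub-network that should receive gradients; the sub-network, shapes, and inputs here are placeholders for illustration only:

import torch
import torch.nn.functional as F

in_ch, out_ch = 3, 8                          # assumed channel sizes for illustration
conv = torch.nn.Conv2d(in_ch, out_ch, 3, 1, 1)

# placeholder sub-network producing a 3x3 modulation tensor
value_net = torch.nn.Linear(16, 9)
values = value_net(torch.randn(16)).view(3, 3)

x = torch.randn(1, in_ch, 32, 32)             # dummy input

# scale the kernel on the fly; the multiplication is recorded by Autograd,
# so gradients flow back into both conv.weight and value_net
out = F.conv2d(x, conv.weight * values, conv.bias, stride=1, padding=1)

out.mean().backward()
print(value_net.weight.grad is not None)      # True: 'values' received gradients

The key difference to the nn.Conv2d module call is that the weight passed to F.conv2d is a fresh tensor built inside the forward pass, so no parameter is mutated and the whole computation stays on the Autograd graph.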


OK, I was wondering the same thing (to use the functional API). It seems there’s no way to do this with the nn.Conv2d module formulation in that case. Thank you.