Add custom computation to gradient calculation of loss function

Hi,

I’m trying to modify my loss function to add a custom regularizer on the weights of the neural network. Right now I just have a series of linear layers, but I want to do elementwise multiplication and division with the weights of those layers inside the loss function, and I want the differentiation to account for this. I tried torch.mul(layer.weight.data, constant), but that computation doesn’t show up in the gradient for the weight. How can I do the same manipulation but have it included in autograd?

Hi,

You should never use .data in current versions of PyTorch.
In this case it breaks the graph and prevents the gradient from being computed properly. Just remove the .data.
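For completeness, here is a minimal sketch of what that looks like with the .data removed. The layer sizes, the constant tensor, and the 0.01 regularizer weight are just placeholders for your own setup:

```python
import torch
import torch.nn as nn

# Placeholder model and constant; substitute your own layers/values.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
constant = torch.full((20, 10), 0.5)

x = torch.randn(4, 10)
target = torch.randn(4, 1)

loss = nn.MSELoss()(model(x), target)

# Use layer.weight directly (no .data) so autograd tracks the
# elementwise multiplication in the regularizer term.
reg = torch.mul(model[0].weight, constant).sum()
loss = loss + 0.01 * reg

loss.backward()  # model[0].weight.grad now includes the regularizer's contribution
```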

This makes me feel stupid, thanks!