Hi,
I’m trying to modify my loss function to add a custom regularizer on the weights of my neural network. Right now the network is just a series of linear layers, and in the loss function I want to do elementwise multiplication and division with the weights of those layers, with the differentiation accounting for those operations. I tried doing this with torch.mul(layer.weight.data, constant), but that computation doesn’t seem to be accounted for in the gradient of the weight. How would I do this same manipulation, but have the calculation included in autograd?
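For reference, here’s a minimal sketch of what I’m doing now (the layer sizes, the constant, and the MSE loss are just placeholders for illustration):

```python
import torch
import torch.nn as nn

# Placeholder network: just a couple of linear layers
model = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 1))
constant = 0.5  # arbitrary constant for the regularizer

x = torch.randn(4, 10)
target = torch.randn(4, 1)

mse = nn.functional.mse_loss(model(x), target)

# This is the part that doesn't seem to work: multiplying
# layer.weight.data appears to bypass autograd, so this term
# never shows up in the weight gradients after backward().
reg = sum(torch.mul(layer.weight.data, constant).sum() for layer in model)

loss = mse + reg
loss.backward()
```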