Custom gradients during backpropagation

Hi everyone,

I was wondering what the best way is to scale the gradient before stepping the weights during backpropagation. For example, let's say I have a weight w, a learning rate alpha, and some custom function f(x). Before updating the weight, I want to scale the gradient by my custom function f(x), as shown below. Thank you for your time and help.

[image: weight update rule w ← w − alpha · f(x) · ∂L/∂w]
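In code, the update I have in mind looks roughly like the manual SGD step below (a minimal sketch; w, alpha, f and the loss are just placeholders for illustration):

```python
import torch

# Placeholder weight, learning rate and custom scaling function
w = torch.randn(3, requires_grad=True)
alpha = 0.1

def f(x):
    return x.pow(2).mean()  # stand-in for the custom function f(x)

x = torch.randn(3)
loss = (w * x).sum()        # stand-in loss
loss.backward()

# Desired behaviour: scale the gradient by f(x) before stepping the weight
with torch.no_grad():
    w -= alpha * f(x) * w.grad
```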

If f(x) is a scalar, you can just scale the loss with its value before performing backpropagation. Detach f(x) if it is computed from any tensors that require gradients, so the scale is treated as a constant.

(loss * f(x).detach()).backward()
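For example, a minimal end-to-end sketch (the model, data and f(x) here are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy setup -- model, data and f(x) are placeholders
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

def f(x):
    return x.abs().mean()  # stand-in for the custom scaling function

x = torch.randn(8, 4)
y = torch.randn(8, 1)

optimizer.zero_grad()
loss = criterion(model(x), y)

# Scaling the loss by the detached scalar multiplies every gradient by the
# same constant factor, so the step becomes w <- w - lr * f(x) * dL/dw
(loss * f(x).detach()).backward()
optimizer.step()
```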
