Hi,
I am defining a custom activation function that inherits from the Function class and has static methods forward() and backward(). If I call F.relu() inside forward(), will that call be tracked by autograd (which would be undesirable)?
Thank you!
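For reference, here is a minimal sketch of the kind of Function I mean (the class name and the backward logic are just illustrative):

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function

class MyActivation(Function):
    @staticmethod
    def forward(ctx, input):
        # The question: is this F.relu() call recorded by autograd?
        out = F.relu(input)
        ctx.save_for_backward(input)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        # Hand-written gradient of ReLU, just as a placeholder
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input
```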
Could you check if wrapping your call to F.relu() inside a with torch.no_grad(): block would work?
@eqy’s suggestion would also work, but by default gradient calculation is already disabled inside custom autograd.Functions. You can check this via print(torch.is_grad_enabled()), which will return False inside the forward and backward methods.
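A quick way to see this for yourself (the class name here is just for illustration):

```python
import torch
from torch.autograd import Function

class CheckGrad(Function):
    @staticmethod
    def forward(ctx, x):
        # Expected to print False: autograd is disabled inside forward()
        print("forward grad enabled:", torch.is_grad_enabled())
        return x * 2

    @staticmethod
    def backward(ctx, grad_output):
        # Expected to print False here as well (unless create_graph=True)
        print("backward grad enabled:", torch.is_grad_enabled())
        return grad_output * 2

x = torch.randn(3, requires_grad=True)
CheckGrad.apply(x).sum().backward()
```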