Negative or positive loss when using autograd.grad

Hello. I have a loss function where a larger value means a better result. Do I need to make the value negative, or can it stay positive, when using autograd.grad?

loss = my_loss(input) # a larger value means a better result
# do I need to use loss * -1 to make it negative?
grad += autograd.grad(loss.sum(), input)[0]

Thanks!

Hi @Belial,

When you use gradient descent you step against the gradient, so if you’re trying to minimize a loss you would do something like this:

loss = my_loss(input)

optim.zero_grad()
loss.backward()
optim.step()

However, if you’re trying to maximize a loss, you need to negate it, because minimizing the negative of a quantity is equivalent to maximizing the quantity itself. You’ll want something like this (assuming you’re using an optim class):

loss = my_loss(input)
loss = -1. * loss # out-of-place negation (do NOT use *=); we're now maximizing instead

optim.zero_grad()
loss.backward()
optim.step()

Sorry for the ambiguity. I am not using an optim class; I only use autograd.grad to calculate the gradient.
Does the same rule still apply?

Yes, the same rule applies if you’re using torch.autograd.grad: negate the loss before taking the gradient if you want to maximize it.
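For example, a minimal sketch of a manual gradient-ascent step with torch.autograd.grad could look like the following (the toy my_loss, tensor shape, and learning rate are just placeholders for illustration):

import torch

# toy objective: larger is better (placeholder for your real my_loss)
def my_loss(x):
    return -(x - 3.0) ** 2

input = torch.randn(8, requires_grad=True)

loss = -1. * my_loss(input)                       # negate: maximizing my_loss == minimizing -my_loss
grad = torch.autograd.grad(loss.sum(), input)[0]  # gradient of the negated loss w.r.t. input

with torch.no_grad():
    input -= 1e-2 * grad                          # descent on -my_loss == ascent on my_loss

Equivalently, you could skip the negation and add the gradient of the original loss to input instead of subtracting it; the negation just lets you keep the usual descent-style update.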

Also check whether your loss should be loss.sum() or loss.mean(). If you’re minimizing some statistical expectation it should be mean(); the two only differ by a constant factor of 1/N in the gradient, which effectively rescales your step size.
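As a quick sanity check on that scaling (the toy tensor here is just for illustration), mean() rescales the gradient of sum() by 1/N:

import torch

x = torch.randn(4, requires_grad=True)
g_sum  = torch.autograd.grad((x ** 2).sum(),  x)[0]
g_mean = torch.autograd.grad((x ** 2).mean(), x)[0]
print(torch.allclose(g_mean, g_sum / x.numel()))  # prints True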


OK, I will take a look. Thanks!