Hello. I have a loss function where a larger value means a better result. When I use autograd.grad, do I need to negate the value, or should it stay positive?
loss = my_loss(input)  # larger values mean a better result
# do I need to use loss * -1 to make it negative?
grad += autograd.grad(loss.sum(), input)[0]
When you use gradient descent you step against the gradient, so if you're trying to minimize a loss you would do something like this:
loss = my_loss(input)
optim.zero_grad()
loss.backward()
optim.step()
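To make "going against the gradient" concrete, here is a minimal manual sketch of the same update that optim.step() performs (plain SGD with a hypothetical learning rate, on a toy loss rather than your my_loss):

```python
import torch

# Toy example: minimize loss = (w - 3)^2, so w should move toward 3.
w = torch.tensor([0.0], requires_grad=True)
lr = 0.1  # hypothetical learning rate

for _ in range(100):
    loss = (w - 3.0) ** 2
    grad, = torch.autograd.grad(loss, w)
    with torch.no_grad():
        w -= lr * grad  # descent: step *against* the gradient

print(w.item())  # converges near 3.0
```

This is exactly what an optim.SGD instance does for you under the hood (minus momentum, weight decay, etc.).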
However, if you're trying to maximize the loss, you need to negate it first, because minimizing the negative of a function is equivalent to maximizing the function itself. Assuming you're using an optim class, you'll want something like this:
loss = my_loss(input)
loss = -1. * loss  # out-of-place negation (do NOT use *=); we're now maximizing
optim.zero_grad()
loss.backward()
optim.step()
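Since your snippet uses autograd.grad directly rather than an optimizer, here is a hedged sketch of the two equivalent options in that setting: negate the loss and subtract the gradient (descent), or keep the loss positive and add the gradient (ascent). The my_loss below is a toy stand-in for yours (larger is better, maximized at x = 2):

```python
import torch

def my_loss(x):
    # Stand-in for your loss: larger values are better.
    # This toy objective is maximized at x = 2.
    return -(x - 2.0) ** 2

x = torch.tensor([0.0], requires_grad=True)
lr = 0.1  # hypothetical step size

for _ in range(100):
    loss = my_loss(x).sum()
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        # Option 1 (equivalent): grad of -loss, then x -= lr * grad
        # Option 2: grad of the positive loss, then step *with* it:
        x += lr * grad  # ascent: larger loss is better

print(x.item())  # moves toward 2.0
```

Either way the parameter update is identical; negating the loss just lets you reuse standard minimization machinery (optim.step()) unchanged.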