Hi everyone,
I believe this has been answered implicitly but not explicitly elsewhere.
Do PyTorch optimizers minimize or maximize a loss function?
Thank you.
Optimizers update each passed parameter by subtracting a step computed from its .grad attribute, i.e. they perform gradient descent. So PyTorch optimizers minimize the loss; if you want to maximize an objective, negate it before calling backward().
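A minimal sketch illustrating the point: minimizing f(x) = (x - 3)² with torch.optim.SGD. The function, learning rate, and step count are arbitrary choices for the demo; the key observation is that the recorded loss shrinks as the optimizer steps against the gradient.

```python
import torch

# Toy objective: f(x) = (x - 3)^2, minimized at x = 3.
x = torch.tensor([0.0], requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)

losses = []
for _ in range(50):
    optimizer.zero_grad()        # clear .grad from the previous step
    loss = (x - 3) ** 2
    loss.backward()              # populate x.grad
    optimizer.step()             # x <- x - lr * x.grad (gradient DESCENT)
    losses.append(loss.item())

print(losses[0], losses[-1])     # loss decreases; x approaches 3
```

If you ran the same loop on the negated loss, `x` would instead diverge away from the maximum's basin, which is the standard trick for turning a PyTorch optimizer into a maximizer.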