Torch optimizer-related question: updating parameters with manually computed gradients

Suppose I write the loss function and the backpropagation code myself: I do not call a loss function provided by PyTorch, nor the autograd-based backward pass attached to it, but instead compute the gradients on my own. In this case, can a packaged torch optimizer still be used to update the parameters?

I would think so, yes.
The optimizer uses the .grad attribute of the parameters passed to it at initialization to update their values.

So as long as .grad is populated, it should work.
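For instance, here is a minimal sketch (with made-up numbers) of what "populated" means: the optimizer only reads each parameter's .grad when .step() is called, regardless of how that gradient got there.

```python
import torch

# Minimal sketch: the optimizer only looks at .grad at step() time.
w = torch.nn.Parameter(torch.tensor([1.0, 2.0]))
opt = torch.optim.SGD([w], lr=0.1)

w.grad = torch.tensor([0.5, -0.5])  # gradient filled in by hand, not by autograd
opt.step()                          # SGD update: w <- w - 0.1 * w.grad
print(w)                            # w is now [0.95, 2.05]
```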

Thank you for your answer, but I still have two questions:

The first question: after I run the forward pass, I run my own backward pass to compute the parameter gradients I need, and then I pass those computed gradients to the optimizer, right?

The second question: how do I pass the parameter gradients to the optimizer? I only see that the optimizer accepts a list of parameters to be optimized, so where do the gradients go?

Not quite. You will need to store the computed gradients in the .grad attribute of each corresponding parameter and then call optimizer.step(); the step then updates the parameters using their .grad.
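To make that concrete, here is a sketch of one training iteration under some assumptions of mine (a single linear layer, mean-squared-error loss, gradients derived analytically by hand); the names and data are hypothetical:

```python
import torch

torch.manual_seed(0)
W = torch.nn.Parameter(torch.randn(1, 3))
b = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([W, b], lr=0.01)

x = torch.randn(8, 3)          # toy batch (hypothetical data)
y = torch.randn(8, 1)

with torch.no_grad():          # autograd is not used at all
    y_hat = x @ W.t() + b      # manual forward pass
    err = y_hat - y
    loss = (err ** 2).mean()   # manual MSE loss

    # Manual backward pass: d(loss)/dW and d(loss)/db for MSE.
    n = x.shape[0]
    grad_W = (2.0 / n) * err.t() @ x   # shape (1, 3), matches W
    grad_b = (2.0 / n) * err.sum(0)    # shape (1,), matches b

# Store the hand-computed gradients where the optimizer expects them ...
W.grad = grad_W
b.grad = grad_b

# ... and let the packaged optimizer perform the update.
opt.step()
opt.zero_grad()                # clear .grad before the next iteration
```

Since the gradients are assigned (not accumulated) each iteration, opt.zero_grad() is not strictly required here, but it is cheap and keeps the loop consistent with the usual autograd workflow.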

If I may ask, is there a specific need to implement your own backward pass and not use PyTorch’s automatic differentiation via autograd?

Yes, I implemented backpropagation myself without using PyTorch's automatic differentiation. In that case, can I still call an optimizer packaged by PyTorch?