How much does the 'optimizer object' depend on the 'loss object'?

  1. Does the optimizer object depend on the loss object in any way, other than requiring loss.backward() to be called before optimizer.step()?

  2. I am creating a custom loss object on every batch iteration (and passing it a parameter). Does this affect the optimizer in any way, compared to creating the loss object only once (before the iteration loop) and reusing it on every iteration? (See the sketch below.)

(I would guess the answer is no to both questions.)
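For concreteness, here is a minimal sketch of the two patterns I am comparing (the model, data, and choice of loss are just placeholders):

import torch

model = torch.nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Pattern A: one loss object created before the loop.
loss_fn_once = torch.nn.CrossEntropyLoss()

for step in range(5):
    inputs = torch.randn(4, 10)
    targets = torch.randint(0, 3, (4,))

    # Pattern B: a fresh loss object every iteration
    # (e.g., so it can be given a per-batch parameter).
    loss_fn_fresh = torch.nn.CrossEntropyLoss()

    loss = loss_fn_fresh(model(inputs), targets)  # loss_fn_once gives the same result
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()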

Hello Deepak!

As you surmise, the answers are “no” and “no.”

An easy way to see this is to remember that you never actually
need to create an instance of a loss object. For example, instead
of instantiating torch.nn.CrossEntropyLoss (and calling it as a
callable function object), you can simply call the function
torch.nn.functional.cross_entropy().
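As a quick illustration (the shapes here are arbitrary), both calls below
compute the same value, and the optimizer sees neither:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)
targets = torch.randint(0, 3, (4,))

# loss via a loss object ...
loss_obj = torch.nn.CrossEntropyLoss()(logits, targets)

# ... and via the plain function -- same result, no object needed
loss_fn = F.cross_entropy(logits, targets)

print(torch.allclose(loss_obj, loss_fn))  # prints: True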

Or you can calculate your loss directly with PyTorch tensor operations:
for example, MSELoss (mse_loss()) can be implemented as:

loss = ((target - input)**2).mean()

No loss object is needed, so the optimizer can't be depending on a
loss object.
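To make that concrete, here is a minimal training step (the model and
data are invented for illustration) in which the loss is a plain tensor
expression and the optimizer behaves exactly as usual:

import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

input = torch.randn(8, 10)
target = torch.randn(8, 1)

output = model(input)
loss = ((target - output)**2).mean()  # hand-rolled MSE, no loss object

optimizer.zero_grad()
loss.backward()   # populates .grad on the model's parameters
optimizer.step()  # reads only the parameters' .grad; never touches the loss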

Best.

K. Frank

Thank you KFrank, your second argument seems more convincing to me.