Is there a way of accessing the grad of other parameters inside the torch.optim.Optimizer.step() function?

Say I have 2 parameters, W1 and W2. For some reason, when updating W1 in torch.optim.Optimizer.step(), I have to use the grad from W2, whereas normally you can only loop through the parameters registered with the optimizer.
So I'm wondering: is there a way of doing this by writing a customized optimizer module myself?

Yes, you should be able to implement your own custom optimizer; you could take a look at e.g. the SGD implementation as a template.
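
A minimal sketch of what such a custom optimizer could look like, loosely following the structure of torch.optim.SGD. The class name `CoupledSGD`, the learning rate, and the rule "update W1 with W2's gradient" are just placeholders for your actual update rule; it also assumes exactly two parameters are registered in the group:

```python
import torch

class CoupledSGD(torch.optim.Optimizer):
    """Toy SGD variant: W1 is updated with W2's gradient (hypothetical rule)."""

    def __init__(self, params, lr=0.1):
        defaults = dict(lr=lr)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            lr = group['lr']
            # assumes this group was created as [W1, W2]
            w1, w2 = group['params']
            if w2.grad is not None:
                # W1 is updated using W2's gradient instead of its own
                w1.add_(w2.grad, alpha=-lr)
                # W2 gets a plain SGD update
                w2.add_(w2.grad, alpha=-lr)
        return loss


# usage sketch
w1 = torch.nn.Parameter(torch.randn(3))
w2 = torch.nn.Parameter(torch.randn(3))
opt = CoupledSGD([w1, w2], lr=0.1)

loss = (w1 * w2).sum()
loss.backward()
opt.step()
```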

Hi! Thanks for the reply.
I'm wondering what would be good practice if I want to split all parameters into 3 sets, A, B, and C:
A is updated normally, B is updated with grad_B + grad_C, and C is not updated at all. How do you iterate over them accordingly?
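
One possible way to organize this is with named parameter groups; the sketch below assumes the hypothetical names `GroupedOptimizer`, `params_A/B/C`, and that the B and C parameters pair up by position with matching shapes:

```python
import torch

class GroupedOptimizer(torch.optim.Optimizer):
    """Sketch of three groups: A is updated normally, B uses grad_B + grad_C,
    C is never updated (only its gradients are read)."""

    def __init__(self, params_A, params_B, params_C, lr=0.1):
        param_groups = [
            {'params': list(params_A), 'name': 'A'},
            {'params': list(params_B), 'name': 'B'},
            {'params': list(params_C), 'name': 'C'},
        ]
        super().__init__(param_groups, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        groups = {g['name']: g for g in self.param_groups}

        # A: ordinary SGD-style update
        for p in groups['A']['params']:
            if p.grad is not None:
                p.add_(p.grad, alpha=-groups['A']['lr'])

        # B: update with grad_B + grad_C (pairs B and C params by position)
        for p_b, p_c in zip(groups['B']['params'], groups['C']['params']):
            if p_b.grad is not None and p_c.grad is not None:
                p_b.add_(p_b.grad + p_c.grad, alpha=-groups['B']['lr'])

        # C: intentionally left untouched
```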