Separate Optimizer

What are the criteria for using separate optimizers?
What I want to build is:

Encoder A -> Classifier A
Encoder A -> Classifier B

I want Classifier B to backpropagate only into Encoder A (not into Classifier A).
Can this single optimizer work, or should I use separate optimizers?

    optimizer = optim.RMSprop(
        list(encoderA.parameters()) +
        list(classifierA.parameters()) +
        list(classifierB.parameters()),
        lr=1e-3)

The optimizer will update all parameters that were passed to it.
However, the optimizer does not define how the gradients are calculated.
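
As a quick illustration of that separation (the two nn.Linear modules and the SGD optimizer below are just hypothetical stand-ins, not your actual models), step() only applies an update where a gradient has actually been stored in .grad:

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # two hypothetical modules standing in for any models
    mod_a = nn.Linear(4, 4)
    mod_b = nn.Linear(4, 4)

    # one optimizer over the parameters of both modules
    optimizer = optim.SGD(list(mod_a.parameters()) + list(mod_b.parameters()), lr=0.1)

    before_a = mod_a.weight.detach().clone()
    before_b = mod_b.weight.detach().clone()

    # fill a gradient for mod_a only; mod_b's .grad stays None
    mod_a.weight.grad = torch.ones_like(mod_a.weight)

    optimizer.step()  # applies an update wherever a .grad exists

    print(torch.equal(mod_a.weight, before_a))  # False: mod_a.weight was changed
    print(torch.equal(mod_b.weight, before_b))  # True: mod_b.weight was not touched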

The computation graph is defined during your forward pass. If you call .backward() on the loss (or any other tensor), Autograd will backpropagate through this computation graph to calculate the gradients.
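
A toy sketch of that flow (a single made-up nn.Linear layer, not your model):

    import torch
    import torch.nn as nn

    layer = nn.Linear(3, 1)
    x = torch.randn(8, 3)
    target = torch.randn(8, 1)

    # forward pass: every operation is recorded in the computation graph
    loss = nn.functional.mse_loss(layer(x), target)

    # backward pass: Autograd walks that graph and fills the .grad attributes
    loss.backward()

    print(layer.weight.grad.shape)  # torch.Size([1, 3])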

E.g., if you calculate the loss using the output of Classifier B, loss.backward() will calculate the gradients of the parameters in Classifier B as well as Encoder A, but not Classifier A, since Classifier A was never used to calculate this particular loss.
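
Below is a small self-contained check of exactly that case; the nn.Linear layers, the shapes, and the cross-entropy loss are just hypothetical placeholders for your Encoder A, Classifier A, and Classifier B:

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # hypothetical stand-ins for the three modules
    encoderA = nn.Linear(10, 5)
    classifierA = nn.Linear(5, 2)
    classifierB = nn.Linear(5, 3)

    optimizer = optim.RMSprop(
        list(encoderA.parameters()) +
        list(classifierA.parameters()) +
        list(classifierB.parameters()),
        lr=1e-3)

    x = torch.randn(4, 10)
    targetB = torch.randint(0, 3, (4,))

    features = encoderA(x)
    logitsB = classifierB(features)  # Classifier A is never used here
    lossB = nn.functional.cross_entropy(logitsB, targetB)

    optimizer.zero_grad()
    lossB.backward()

    print(encoderA.weight.grad is not None)     # True
    print(classifierB.weight.grad is not None)  # True
    print(classifierA.weight.grad)              # None: not part of this graph

    optimizer.step()  # only encoderA and classifierB receive an update

Since Classifier A's .grad entries stay None, the following optimizer.step() leaves it untouched, so a single optimizer can behave the way you describe.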