Multiple losses, multiple backwards, multiple optimizers

I have three models:
M1 produces an output (OUT1) that is fed as the input (IN2) to M2.
M1.requires_grad_(False)
M2.requires_grad_(True)
M2 uses the output of M1, and I call OUT1.detach() to stop backpropagation into M1.
Then I backward through M2.
I feed the output of M2 to M3, again with .detach().
Then I calculate Loss1 and backpropagate through M1, based on the outputs of M2 and M3.

All of this works fine.
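For reference, here is a minimal sketch of the working detach() version. The toy Linear models, shapes, and losses are made-up placeholders standing in for my real ones:

```python
import torch
import torch.nn as nn

# toy stand-ins for my three models; shapes are placeholders
M1 = nn.Linear(10, 10)
M2 = nn.Linear(10, 10)
M3 = nn.Linear(10, 10)

optim1 = torch.optim.SGD(M1.parameters(), lr=1e-3)
optim2 = torch.optim.SGD(M2.parameters(), lr=1e-3)
optim3 = torch.optim.SGD(M3.parameters(), lr=1e-3)

x = torch.randn(8, 10)

OUT1 = M1(x)

# M2 trains on its own loss; detach() keeps M1 out of its backward
OUT2 = M2(OUT1.detach())
loss2 = OUT2.pow(2).mean()          # placeholder loss
optim2.zero_grad()
loss2.backward()
optim2.step()

# same for M3; detach() keeps M2 (and M1) out of its backward
OUT3 = M3(OUT2.detach())
loss3 = OUT3.pow(2).mean()          # placeholder loss
optim3.zero_grad()
loss3.backward()
optim3.step()

# Loss1 uses the downstream outputs, but they are detached,
# so backward() only reaches M1 through OUT1
loss1 = (OUT1 * OUT2.detach() * OUT3.detach()).mean()   # placeholder f(...)
optim1.zero_grad()
loss1.backward()
optim1.step()
```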

Now I want to backpropagate through the whole chain M1 > M2 > M3, without using detach(), but I get various errors, such as "… is at version 3; expected version 2 instead".
I expected to be able to do something like this:

M1.requires_grad_(True)
M2.requires_grad_(False)
M3.requires_grad_(False)
OUT1 = M1(x)

M1.requires_grad_(False)
M2.requires_grad_(True)
M3.requires_grad_(False)
OUT2 = M2(OUT1)
calculate loss2
loss2.backward()
optim2.step()

M1.requires_grad_(False)
M2.requires_grad_(False)
M3.requires_grad_(True)
OUT3 = M3(OUT2)
calculate loss3
loss3.backward()
optim3.step()

M1.requires_grad_(True)
M2.requires_grad_(False)
M3.requires_grad_(False)
loss1 = f(OUT1, OUT2, OUT3, loss2, loss3)
loss1.backward()
optim1.step()
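Spelled out as runnable code, this is roughly what I tried (same toy models, optimizers, and placeholder losses as in the sketch above; the exact loss functions are not the point). Depending on how the losses are wired, the version-counter error shows up at loss3.backward() or at loss1.backward():

```python
# same toy models and optimizers as in the first sketch
x = torch.randn(8, 10)

# stage 1: forward through M1 (M1 should only learn from loss1 later)
M1.requires_grad_(True)
M2.requires_grad_(False)
M3.requires_grad_(False)
OUT1 = M1(x)

# stage 2: train M2 on its own loss; no detach() this time
M1.requires_grad_(False)
M2.requires_grad_(True)
OUT2 = M2(OUT1)
loss2 = OUT2.pow(2).mean()           # placeholder loss
optim2.zero_grad()
loss2.backward(retain_graph=True)    # without retain_graph=True, a later backward
                                     # fails with "Trying to backward through the
                                     # graph a second time"
optim2.step()                        # updates M2's parameters in place

# stage 3: train M3 on its own loss
M2.requires_grad_(False)
M3.requires_grad_(True)
OUT3 = M3(OUT2)
loss3 = OUT3.pow(2).mean()           # placeholder loss
optim3.zero_grad()
loss3.backward(retain_graph=True)    # this traverses M2's part of the retained
                                     # graph, whose weights optim2.step() just
                                     # changed in place
optim3.step()                        # updates M3's parameters in place

# stage 4: update only M1 with a loss that sees everything
M1.requires_grad_(True)
M3.requires_grad_(False)
loss1 = OUT1.mean() + loss2 + loss3  # placeholder for f(OUT1, OUT2, OUT3, loss2, loss3)
optim1.zero_grad()
loss1.backward()                     # here (or already at loss3.backward()) I get:
                                     # "one of the variables needed for gradient
                                     # computation has been modified by an inplace
                                     # operation: ... is at version 3; expected
                                     # version 2 instead"
optim1.step()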

So I want multiple stage losses. I want to keep M1 and OUT1 out of the backward passes of M2 and M3, while still using OUT1 inside their computations. At the end, I want a loss that backpropagates through M3 and M2 (without affecting their parameters) and updates only M1, so that loss1 can see how M1's output influences loss2 and loss3. In other words, M2 works on its own loss2 to lower it, while loss1 learns how changing M1 changes loss2 and loss3.
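To make "without affecting their parameters" concrete: since each optimizer only holds its own model's parameters, my plan is that loss1.backward() may fill gradients in M2/M3, but only optim1.step() applies an update, and the stray gradients get cleared afterwards:

```python
# optim1 was built from M1.parameters() only, so this step cannot
# change M2 or M3, even though loss1's graph runs through them
optim1.step()

# drop whatever gradients loss1.backward() left in M2 and M3,
# so they don't leak into the next loss2/loss3 updates
optim2.zero_grad()
optim3.zero_grad()
```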