Training sub-modules with different losses

I want to better understand PyTorch computational graphs. If I declare multiple optimizers, each given only the parameters of a single layer or sub-module, and then backpropagate through multiple losses in the following way, is each layer/sub-module updated only by the backward call of its corresponding loss?

# One optimizer per sub-module, each constructed with only that sub-module's parameters.
self.optimizer_critic = torch.optim.Adam(self.acmodel.critic.parameters(), lr, eps=adam_eps)
self.optimizer_vae = torch.optim.Adam(self.acmodel.vae.parameters(), lr, eps=adam_eps)
self.optimizer_mdnrnn = torch.optim.Adam(self.acmodel.mdnrnn.parameters(), lr, eps=adam_eps)

# Critic: backprop its loss, then step only the critic optimizer.
self.optimizer_critic.zero_grad()
batch_loss.backward(retain_graph=True)
self.optimizer_critic.step()

# MDN-RNN: backprop its loss, then step only the MDN-RNN optimizer.
self.optimizer_mdnrnn.zero_grad()
batch_mdnrnn_loss.backward(retain_graph=True)
self.optimizer_mdnrnn.step()

# VAE: backprop its loss, then step only the VAE optimizer.
self.optimizer_vae.zero_grad()
batch_vae_loss.backward(retain_graph=True)
self.optimizer_vae.step()
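
To make the question concrete, here is a stripped-down, self-contained sketch of the pattern I'm asking about. The ToyModel, its Linear sub-modules, and the losses are made-up placeholders standing in for my actual critic/VAE/MDN-RNN setup; only the zero_grad/backward/step pattern matches my real code.

import torch
import torch.nn as nn

# Toy stand-ins for my real sub-modules (critic, VAE, MDN-RNN); here they are
# just Linear layers so the example runs on its own.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)   # shared trunk both losses flow through
        self.critic = nn.Linear(8, 1)
        self.vae = nn.Linear(8, 8)

model = ToyModel()
opt_critic = torch.optim.Adam(model.critic.parameters(), lr=1e-3)
opt_vae = torch.optim.Adam(model.vae.parameters(), lr=1e-3)

x = torch.randn(4, 8)
h = model.encoder(x)
critic_loss = model.critic(h).pow(2).mean()
vae_loss = (model.vae(h) - x).pow(2).mean()

# Backprop the critic loss and step only the critic optimizer.
opt_critic.zero_grad()
critic_loss.backward(retain_graph=True)  # retain_graph because vae_loss shares the graph through h
opt_critic.step()

# Inspect which parameters actually received gradients from critic_loss.
for name, p in model.named_parameters():
    print(name, "grad is None" if p.grad is None else f"grad norm {p.grad.norm():.4f}")

# Backprop the VAE loss and step only the VAE optimizer.
opt_vae.zero_grad()          # clears grads of model.vae parameters only
vae_loss.backward()
opt_vae.step()

Inspecting p.grad after each backward call (as in the sketch) is how I've been trying to check this, but I'd like to confirm I'm interpreting the computational graph behaviour correctly.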