Will backward() compute gradients through the target?

I’m implementing DDPG, where the target is computed from a target network. I define the mean squared error loss as follows:

loss = F.mse_loss(self.critic_main(states, actions), target)

However, I don’t know whether loss.backward() will also compute gradients of the loss with respect to the parameters of the target network that produced target. Should I call detach() on target in advance to avoid this redundant computation?
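
For reference, here is a minimal sketch of the detach approach I have in mind; the names actor_target, critic_target, next_states, rewards, dones, and gamma are placeholders standing in for my actual setup:

import torch
import torch.nn.functional as F

# Build the TD target without recording a computation graph, so
# backward() never touches the target networks' parameters.
with torch.no_grad():
    next_actions = self.actor_target(next_states)
    next_q = self.critic_target(next_states, next_actions)
    target = rewards + gamma * (1.0 - dones) * next_q

loss = F.mse_loss(self.critic_main(states, actions), target)
loss.backward()  # gradients flow only into critic_main's parameters

# If target were instead built with grad enabled, I assume this
# one-liner would cut the graph at the same point:
# loss = F.mse_loss(self.critic_main(states, actions), target.detach())

Is this understanding correct, or does backward() already avoid the target branch on its own?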