Checking my understanding of the detach() method

Suppose I have a network which outputs two values, a and b:

a, b = model(x)
loss = ((a - y) ** 2).mean()       # part one: MSE on a
loss += (a.detach() * b).mean()    # part two: a is detached, so this term only trains b

In part one, the mean squared error term would backpropagate only through the output a, so it would only affect the parameters through their contribution to a.

In part two of the loss, a is detached, so the term would backpropagate only through the output b and only affect the parameters through their contribution to b.

So in effect I can define a loss term that depends on both outputs, but the network will only be optimized through the output that isn't detached. Is this correct, or did I miss something?
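
For context, here is a minimal sketch I would use to check this. The two-headed toy model and the layer names are just placeholders, not my actual network:

import torch
import torch.nn as nn

# Toy model with a shared layer and two output heads
class TwoHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(4, 8)
        self.head_a = nn.Linear(8, 1)
        self.head_b = nn.Linear(8, 1)

    def forward(self, x):
        h = torch.relu(self.shared(x))
        return self.head_a(h), self.head_b(h)

model = TwoHead()
x = torch.randn(16, 4)
y = torch.randn(16, 1)

a, b = model(x)
loss = ((a - y) ** 2).mean()       # part one: gradients flow through a
loss += (a.detach() * b).mean()    # part two: a detached, gradients flow only through b
loss.backward()

# head_a should only receive gradients from part one,
# head_b only from part two,
# while the shared layer receives gradients from both terms.
print(model.head_a.weight.grad.abs().sum())
print(model.head_b.weight.grad.abs().sum())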