Using retain_graph=True when optimizing different parameters with different losses

I want to optimize N different input parameters, z_i, all fed into the same network (similar to this thread).
I have defined N different losses, loss[i], and optimizers, optimiser[i], one for each parameter.

for i in range(N):
    optimiser[i].zero_grad()
    loss[i].backward(retain_graph=True)
    optimiser[i].step()

Is it necessary to use `retain_graph=True` every time we call `loss[i].backward()`?
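For reference, here is a minimal runnable sketch of the setup I have in mind (the network, targets, and loss definitions are hypothetical stand-ins). In this variant each loss is recomputed inside the loop from its own z_i, so each backward() call frees only that iteration's graph:

```python
import torch

torch.manual_seed(0)
N = 3
net = torch.nn.Linear(4, 1)  # shared network (hypothetical)
# N input parameters z_i, each optimized independently
z = [torch.randn(4, requires_grad=True) for _ in range(N)]
optimisers = [torch.optim.SGD([z_i], lr=0.1) for z_i in z]
targets = [torch.randn(1) for _ in range(N)]  # hypothetical targets

for i in range(N):
    optimisers[i].zero_grad()
    # Each loss is built from its own forward pass, so the graphs
    # are independent and retain_graph=True is not needed here.
    loss = ((net(z[i]) - targets[i]) ** 2).mean()
    loss.backward()
    optimisers[i].step()
```

If instead all N losses are computed up front (as in the loop above) and they share any part of the graph, the shared part is freed by the first backward() unless retain_graph=True is passed.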