How is the gradient backpropagated in WGAN-GP if D's requires_grad is set to False?

How is the gradient backpropagated from the critic to the generator if we set

for p in netD.parameters():
    p.requires_grad = False  # to avoid computation

in the generator training, as in this example?

That just tells PyTorch not to compute gradients with respect to the parameters of D. The gradient still flows back through the inputs of D to the parameters of G.
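Concretely, in the generator step the loss is computed on fake samples produced by G, so the backward pass runs through D's operations and continues into G's parameters. Below is a minimal, self-contained sketch of that step; the names (netG, netD, optimizerG) and the layer sizes are placeholders for illustration, not the code from the linked example.

import torch
import torch.nn as nn

# toy stand-ins for the real networks
netG = nn.Linear(10, 32)   # "generator": latent -> fake sample
netD = nn.Linear(32, 1)    # "critic": sample -> score
optimizerG = torch.optim.Adam(netG.parameters(), lr=1e-4)

for p in netD.parameters():
    p.requires_grad = False      # don't build grads for D's parameters

netG.zero_grad()
noise = torch.randn(4, 10)
fake = netG(noise)               # fake is attached to G's autograd graph
errG = -netD(fake).mean()        # generator loss: maximize the critic score
errG.backward()                  # backprop runs through D's ops into G
optimizerG.step()

for p in netD.parameters():
    p.requires_grad = True       # re-enable before the next critic update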

Isn't the gradient of G dependent on the gradient flowing back from D?

Yes, but it only depends on the gradient with respect to the input of D; it does not depend on the gradient with respect to the parameters of D.
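A quick way to verify this is to freeze a toy critic, backpropagate once, and inspect the .grad fields; the layer names and sizes here are made up for illustration. G's parameters receive gradients while D's stay at None.

import torch
import torch.nn as nn

G = nn.Linear(5, 8)   # toy "generator"
D = nn.Linear(8, 1)   # toy "critic"
for p in D.parameters():
    p.requires_grad = False

loss = -D(G(torch.randn(3, 5))).mean()
loss.backward()

print(G.weight.grad is not None)  # True  -> gradient w.r.t. G's parameters
print(D.weight.grad)              # None  -> no gradient w.r.t. D's parameters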