unstable WGAN-GP gradients

I do not understand why, when I train my WGAN-GP network, on one run the generator receives healthy gradients and training is stable (I can see both the generator and the critic improving), yet on another run the generator receives practically no gradients from the very beginning and never learns. Is it possible to make this stable, so that the network learns every time I run the code instead of so randomly?

GANs are inherently unstable.

But since you’re already using the Wasserstein loss (i.e., WGAN), it sounds to me like the learning rate on your optimizer might be set too high. Try lowering it and see if that helps.
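
For reference, a minimal sketch of the Adam settings used in the WGAN-GP paper (Gulrajani et al., 2017): a learning rate of 1e-4 with betas of (0.0, 0.9). Keeping beta1 at the PyTorch default of 0.9 together with a large learning rate is a common source of the unstable behavior you describe. The `generator` and `critic` below are placeholder modules just so the snippet runs; swap in your own networks.

```python
import torch
import torch.nn as nn

# Placeholder networks so the snippet is self-contained;
# replace these with your actual generator and critic.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))
critic = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

# Adam settings from the WGAN-GP paper: lr=1e-4, betas=(0.0, 0.9).
# Lowering lr and beta1 (vs. the defaults of 1e-3 and 0.9) tends to
# stabilize critic/generator training.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.0, 0.9))
```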