ModuleValidator.fix() causes gradients to be None

After replacing BatchNorm with GroupNorm via ModuleValidator.fix(), the gradients of the parameters returned by model.named_parameters() are None. The grad_sample attributes of the parameters are also None, so gradients are not flowing to the replaced GroupNorm weights during the backward pass.

Could you explain what ModuleValidator.fix() does? It seems to be a custom method.

It’s a method from Opacus (Opacus · Train PyTorch models with Differential Privacy).
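For context, ModuleValidator.fix() walks the model and replaces modules that are incompatible with DP-SGD (most notably BatchNorm, which mixes information across samples in a batch) with compatible ones such as GroupNorm. Below is a minimal sketch of the equivalent manual replacement in plain PyTorch, showing that gradients do flow through a freshly swapped-in GroupNorm layer; the helper function name is my own, not part of the Opacus API, and the group-count heuristic is only an assumption about what fix() does internally:

```python
import torch
import torch.nn as nn

def replace_batchnorm_with_groupnorm(module: nn.Module) -> None:
    # Recursively swap every BatchNorm2d for a GroupNorm over the same
    # number of channels (roughly what ModuleValidator.fix is reported
    # to do; the min(32, C) group count is an assumption).
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            num_channels = child.num_features
            setattr(module, name,
                    nn.GroupNorm(num_groups=min(32, num_channels),
                                 num_channels=num_channels))
        else:
            replace_batchnorm_with_groupnorm(child)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
replace_batchnorm_with_groupnorm(model)

# Run a forward/backward pass: the GroupNorm weights should receive grads.
out = model(torch.randn(2, 3, 8, 8))
out.sum().backward()
print(type(model[1]).__name__)          # GroupNorm
print(model[1].weight.grad is not None) # True
```

If the GroupNorm gradients here are populated but are still None in your setup, the issue is likely elsewhere (e.g. the optimizer or PrivacyEngine was created before calling fix(), so it holds references to the old BatchNorm parameters).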