Adam optimizer exp_avg_sq turns to infinity

After upgrading from PyTorch 1.1.0 to 1.3.1 I started getting some weird problems during training related to the Adam optimizer.
Essentially after a certain point, the model would stop training, as if learning rate was set to zero.
Looking into the Adam optimizer state revealed that the exponential moving average of the squared gradients (exp_avg_sq) is infinite for some tensors:

tensor([[[[-1.7468e-03, -1.6368e-03, -1.2466e-03, …, -1.4461e-03, -1.6768e-03, -9.0883e-04], …

tensor([[[[inf, inf, inf, …, inf, inf, inf], [inf, inf, inf, …, inf, inf, inf], [inf, inf, inf, …, inf, inf, inf] …
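For anyone who wants to check their own runs, this is a minimal sketch of how the state can be inspected. The toy model and the single training step are just placeholders; the relevant part is reading `exp_avg_sq` out of `optimizer.state` and testing it with `torch.isfinite`:

```python
import torch

# Hypothetical toy model just to populate the optimizer state.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One backward/step so Adam creates its per-parameter state.
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

# Scan every parameter's exp_avg_sq (EMA of squared gradients) for inf/nan.
for group in optimizer.param_groups:
    for p in group["params"]:
        exp_avg_sq = optimizer.state[p]["exp_avg_sq"]
        if not torch.isfinite(exp_avg_sq).all():
            print("non-finite exp_avg_sq found in a parameter of shape", tuple(p.shape))
```

In a healthy run this prints nothing; once exp_avg_sq overflows to inf, the effective step size for those parameters collapses toward zero, which matches the "learning rate set to zero" symptom above.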

I have never seen this problem in previous versions of PyTorch. Is this a bug or a numerical problem with my training procedure?

It might be related to this fix:
EDIT: no, probably not related


I managed to fix this by clipping the gradients with torch.nn.utils.clip_grad_norm_(self.actor_critic.parameters(), self.cfg.max_grad_norm) before each optimizer step.

Unfortunately this slows down training, and I didn't have to do it in the previous version. Weird.
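For context, the workaround slots in between backward() and step(). This is a sketch with an illustrative model and max_grad_norm value (the original uses self.actor_critic and self.cfg.max_grad_norm, which aren't shown here):

```python
import torch

# Illustrative stand-ins for the actor_critic model and config value.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
max_grad_norm = 1.0  # assumed value, not from the original post

loss = model(torch.randn(8, 4)).sum()
optimizer.zero_grad()
loss.backward()
# Rescale gradients in place so their global L2 norm is at most max_grad_norm;
# this caps the squared gradients fed into Adam's exp_avg_sq.
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
```

clip_grad_norm_ returns the (pre-clipping) total norm, which is handy to log if you want to see how often clipping actually kicks in.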