In order to avoid a Null loss with the Adam optimizer using fp16 autocast, I have to change the eps value from 1e-8 to 1e-6. However, I found that by doing this my model converges much more slowly, or does not converge at all. Does anyone know why this would be?
Could you explain what issues you are seeing in the loss and what Null means in this case?
The issue is that when eps is set to 1e-8, as is the default, and used with autocast, the network’s loss will inevitably become Null after some epochs. However, increasing eps to a higher value seems to make the problem go away, yet compared to training not done in fp16, convergence is much slower.
Are you seeing NaN loss values after some time, or what is Null referring to?
If so, what kind of model architecture are you using?
Sorry, I meant NaN loss. I’m performing a NAS search for pose estimation.
I’m not sure if your NAS implementation initializes the model parameters with “large” values explicitly, or if the general training of these new architectures tends to create large output values, which might cause overflows easily. In that case I’m unsure if there is a better workaround than to increase the eps value, as 1e-8 might underflow in FP16 if you add it to a larger tensor.
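To illustrate the underflow concern, here is a small sketch using NumPy’s float16 as a stand-in for a CUDA FP16 tensor (an assumption for demonstration only, not code from this thread):

```python
import numpy as np

# 1e-8 is below FP16's smallest subnormal (~6e-8), so it rounds to zero;
# even if it survived, adding it to 1.0 would be lost to rounding,
# since FP16's machine epsilon is about 1e-3.
x = np.float16(1.0)
print(x + np.float16(1e-8) == x)   # the eps vanishes entirely
print(x + np.float16(1e-3) == x)   # 1e-3 is still large enough to register
```

So any step of the computation that lands in FP16 silently discards a 1e-8 eps, whereas 1e-6 sits closer to (though still below) the representable granularity near 1.0.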
Since eps controls the denominator that decides the step size, a larger eps would mean a smaller step size. Would you suggest that I increase the overall learning rate to compensate?
The eps value is used here to avoid dividing by a zero (or vanishingly small) denominator.
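To see how much eps actually matters, here is an illustrative calculation of Adam’s update, lr * m_hat / (sqrt(v_hat) + eps). The moment values below are made up for illustration, not taken from this thread; the point is that eps is negligible unless the second moment itself is tiny:

```python
import math

lr = 1e-3
m_hat = 1e-4    # hypothetical bias-corrected first moment (small gradient)
v_hat = 1e-12   # hypothetical tiny second moment, so sqrt(v_hat) = 1e-6

for eps in (1e-8, 1e-6):
    # eps only matters when it is comparable to sqrt(v_hat);
    # here the larger eps roughly halves the step
    step = lr * m_hat / (math.sqrt(v_hat) + eps)
    print(f"eps={eps:.0e}: step={step:.2e}")
```

With typical (larger) second moments, sqrt(v_hat) dwarfs both eps values and the step size is essentially unchanged, which is why a global learning-rate increase may not be the right compensation.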
I just checked autocast and it seems all internal optimizer buffers are still stored in FP32, so I’m unsure why the eps value might cause trouble in this case (due to a potential underflow).
Here is a minimal code snippet:
```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

with torch.cuda.amp.autocast():
    out = model(torch.randn(1, 10).cuda())
    loss = out.mean()

loss.backward()
optimizer.step()
print(optimizer.state_dict())
```
@mcarilli are you familiar with similar issues using autocast?
I use GradScaler; could this be the problem?
Can you use autocast independently from GradScaler?
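For reference, a sketch of the usual autocast + GradScaler training step (based on the standard AMP pattern; the `enabled` flags are an assumption added here so the snippet also runs on CPU as a no-op):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 10).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-8)
# GradScaler multiplies the loss by a scale factor so small FP16
# gradients don't underflow to zero during backward
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(2):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        out = model(torch.randn(4, 10, device=device))
        loss = out.mean()
    scaler.scale(loss).backward()   # backprop the scaled loss
    scaler.step(optimizer)          # unscales grads, skips step on inf/NaN
    scaler.update()                 # adjusts the scale factor for next step
```

GradScaler addresses gradient underflow during backward, which is a separate concern from the eps underflow discussed above; autocast can be used without it, but unscaled FP16 gradients are then more likely to vanish.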