Problem when using different learning rates per module

I am working with a DeepLab-like model and I am encountering the following error:
RuntimeError: value cannot be converted to type float without overflow (0.000140182, -4.55471e-05) .../optimization/, line 107 in step['lr'], d_p)
This happens when I pass parameter groups of the form [{'params': ..., 'lr': 10.0*lr0}] to the optimizer.
Even if I use 'lr': lr0 (the same learning rate for all modules) I still get the same error. However, if I simply pass all the parameters as a single group, the model converges without issues.
I am using a custom _LRScheduler (power scheduling) that appears to handle multiple parameter groups correctly (I am also logging the SGD optimizer, and it shows the correct learning rates for the groups). In both cases the error only appears after 20 epochs.
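For reference, here is a minimal sketch of the setup I am describing. The model, lr0, the decay exponent, and the epoch counts are placeholders, not my actual values; the scheduler is a generic polynomial ("power") decay built on _LRScheduler:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import _LRScheduler

class PowerLR(_LRScheduler):
    """Polynomial decay: lr = base_lr * (1 - epoch / max_epochs) ** power."""
    def __init__(self, optimizer, max_epochs, power=0.9, last_epoch=-1):
        self.max_epochs = max_epochs
        self.power = power
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        factor = (1.0 - self.last_epoch / self.max_epochs) ** self.power
        return [base_lr * factor for base_lr in self.base_lrs]

# Stand-in for the DeepLab-like model: a "backbone" and a "head" module.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 2, 1))
lr0 = 1e-3

# Two parameter groups: the head gets 10x the base learning rate.
optimizer = torch.optim.SGD(
    [{'params': model[0].parameters(), 'lr': lr0},
     {'params': model[1].parameters(), 'lr': 10.0 * lr0}],
    momentum=0.9)
scheduler = PowerLR(optimizer, max_epochs=50)

for epoch in range(25):
    # ... forward/backward and optimizer.step() would go here ...
    scheduler.step()
    # Log the per-group learning rates; they should stay finite floats.
    print(epoch, [g['lr'] for g in optimizer.param_groups])
```

In this sketch the group learning rates stay ordinary finite Python floats and the 10x ratio between groups is preserved at every step, so one thing worth checking is whether my scheduler's get_lr ever returns a tensor or an out-of-range value around epoch 20.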