Problem when using different learning rates per module

Hello,
I am working with a DeepLab-like model and I am encountering the following error:
RuntimeError: value cannot be converted to type float without overflow (0.000140182, -4.55471e-05)
  .../optimization/sgd.py, line 107, in step
    p.data.add_(group['lr'], d_p)
This happens when I pass parameter groups of the following form to the optimizer:
[{'params': self.net.backbone.parameters()},
 {'params': self.net.head.parameters(), 'lr': 10.0 * lr0}]
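
For context, this is roughly how I construct the optimizer (a minimal sketch, not my exact code; the Net class, lr0 value, and momentum below are placeholders for my actual model and hyperparameters):

import torch
import torch.nn as nn

# Dummy stand-in for the DeepLab-like model (backbone + head), for illustration only.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3)
        self.head = nn.Conv2d(16, 2, 1)

net = Net()
lr0 = 1e-3  # placeholder base learning rate

optimizer = torch.optim.SGD(
    [
        {'params': net.backbone.parameters()},                # falls back to the default lr
        {'params': net.head.parameters(), 'lr': 10.0 * lr0},  # 10x lr for the head
    ],
    lr=lr0,       # default lr for groups that do not specify one
    momentum=0.9  # placeholder
)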
Even if I use 'lr': lr0 (the same learning rate for all modules), I still receive the same error.
However, if I simply pass self.net.parameters(), the model converges without issues.
I am using a custom _LRScheduler (power scheduling) that appears to honour multiple parameter groups correctly (I am also logging the SGD optimizer, and it shows the correct learning rates for each group). In both cases the error only appears after about 20 epochs.
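
For reference, the scheduler is along these lines (a minimal sketch, not my exact implementation; max_iter and power are placeholder names, and base_lrs is populated by _LRScheduler itself):

from torch.optim.lr_scheduler import _LRScheduler

class PolyLR(_LRScheduler):
    # Power/poly decay: lr = base_lr * (1 - last_epoch / max_iter) ** power,
    # applied per parameter group so the 10x head/backbone ratio is preserved.
    def __init__(self, optimizer, max_iter, power=0.9, last_epoch=-1):
        self.max_iter = max_iter
        self.power = power
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        # Clamp at zero so the decay factor never goes negative past max_iter.
        factor = max(0.0, 1.0 - self.last_epoch / self.max_iter) ** self.power
        return [base_lr * factor for base_lr in self.base_lrs]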

Thanks