Adamw param.mul_(1 - lr * weight_decay) RuntimeError: result type ComplexFloat can't be cast to the desired output type Float

I use AdamW as the optimizer, and after the training had run for about a day I got this problem:

[epoch][s/s_per_e/gs]: [99][304/319/31899], lr: 0.000001000796, loss: 0.251922130585
[epoch][s/s_per_e/gs]: [99][305/319/31900], lr: 0.000001000000, loss: 0.198185890913
Traceback (most recent call last):
File "train_main.py", line 745, in
main()
File "train_main.py", line 740, in main
main_worker(0, ngpus_per_node, args)
File "train_main.py", line 591, in main_worker
optimizer.step()
File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/optimizer.py", line 89, in wrapper
return func(*args, **kwargs)
File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/adamw.py", line 121, in step
group['eps'])
File "/home/bigtree/miniconda3/envs/color/lib/python3.7/site-packages/torch/optim/<em>functional.py", line 122, in adamw
param.mul</em>(1 - lr * weight_decay).

RuntimeError: result type ComplexFloat can't be cast to the desired output type Float

Please help.

It turns out that the lr became very small (< 1e-6). After fixing this, the problem was solved.

That sounds quite weird. Do you have a minimal code snippet to reproduce the error message by changing the learning rate?

I was getting the same issue at line optimizer.step(). The error was:
RuntimeError: result type ComplexFloat can't be cast to the desired output type Float.
Solution:
The problem was that the learning rate was taking complex values. Once I fixed that, the issue was resolved.
I was setting the learning rate using:

for param_group in optimizer.param_groups:
    current_lr = <custom function to calc learning rate>  # must return a plain real float, not a complex value
    param_group['lr'] = current_lr
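
For completeness, here is a minimal sketch (not from the original posters; the model, values, and "bad schedule" are made up) of how a complex learning rate reproduces the same RuntimeError at optimizer.step() on a PyTorch version whose AdamW does param.mul_(1 - lr * weight_decay):

import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# A schedule written with a fractional power of a negative float silently
# yields a complex number in Python 3 (hypothetical example of a bad schedule):
bad_lr = (-1e-6) ** 0.5   # complex, not a real float

for param_group in optimizer.param_groups:
    param_group['lr'] = bad_lr

loss = model(torch.randn(2, 4)).sum()
loss.backward()
optimizer.step()  # RuntimeError: result type ComplexFloat can't be cast to the desired output type Float

Because 1 - lr * weight_decay is then a complex scalar, the in-place mul_ on the Float parameters fails exactly as in the traceback above. Assigning param_group['lr'] = float(current_lr) would instead fail fast with a TypeError at the assignment, which is much easier to debug than an error deep inside the optimizer.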