I have implemented a custom module that operates on complex tensors. The input, output, and trainable weights of this module all have dtype=torch.complex64. When training the module with the Adam optimizer on a GPU, I get the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/optim/adam.py", line 119, in step
    group['eps']
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/optim/functional.py", line 87, in adam
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'
```
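For context, here is a minimal stand-in that triggers the same error for me (hypothetical and heavily simplified; my actual module is more involved, but the dtype setup is the same):

```python
import torch
import torch.nn as nn

class ComplexScale(nn.Module):
    """Toy module with complex64 trainable weights (stand-in for my real one)."""
    def __init__(self, n):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n, dtype=torch.complex64))

    def forward(self, x):
        return x * self.weight  # elementwise complex multiply

model = ComplexScale(4).cuda()
optimizer = torch.optim.Adam(model.parameters())

x = torch.randn(3, 4, dtype=torch.complex64).cuda()
loss = model(x).abs().sum()  # reduce to a real-valued scalar loss
loss.backward()
optimizer.step()             # raises the RuntimeError above
```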
It seems that torch.addcmul cannot be applied to complex tensors on the GPU. To verify this limitation of PyTorch, I ran the following code:
```python
import torch

a = torch.randn(3, 4, dtype=torch.complex64).cuda()
b = torch.randn(3, 4, dtype=torch.complex64).cuda()
c = torch.randn(3, 4, dtype=torch.complex64).cuda()
torch.addcmul(a, b, c)
```
As expected, the same error message appeared:
```
RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'
```
But when I run the same code on the CPU, no error occurs.
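For reference, this is the CPU version that runs cleanly (same code, just without the .cuda() calls):

```python
import torch

a = torch.randn(3, 4, dtype=torch.complex64)
b = torch.randn(3, 4, dtype=torch.complex64)
c = torch.randn(3, 4, dtype=torch.complex64)
torch.addcmul(a, b, c)  # no error on CPU
```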
I also tested several other pointwise ops such as add, mul, and addcdiv. It seems that only addcmul and addcdiv lack support for complex tensors on the GPU.
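Roughly the checks I ran (results may of course vary with the PyTorch version):

```python
import torch

a = torch.randn(3, 4, dtype=torch.complex64).cuda()
b = torch.randn(3, 4, dtype=torch.complex64).cuda()
c = torch.randn(3, 4, dtype=torch.complex64).cuda()

torch.add(a, b)         # works
torch.mul(a, b)         # works
torch.addcmul(a, b, c)  # RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'
torch.addcdiv(a, b, c)  # RuntimeError: "addcdiv_cuda" not implemented for 'ComplexFloat'
```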
So, my question is: how could I modify the addcmul/addcdiv source code (or work around them) so that I can use the Adam optimizer with my custom module that operates on complex tensors? Is this issue going to be fixed in upcoming PyTorch versions?
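One workaround I am considering, since mul and add do work on complex CUDA tensors, is to decompose the failing addcmul_ call in torch/optim/functional.py (line 87 in the traceback above) into those ops. A sketch of what I mean (unverified; in particular, I am not sure whether grad * grad or grad * grad.conj() is the mathematically correct second moment for complex gradients):

```python
# Original line (fails on CUDA for complex tensors):
#   exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

# Equivalent decomposition using ops that do support complex CUDA tensors:
exp_avg_sq.mul_(beta2).add_(grad * grad * (1 - beta2))
```

Is something like this safe, or is there a better-supported path, e.g. keeping the parameters as real tensors and reinterpreting them as complex in the forward pass via torch.view_as_complex / torch.view_as_real?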
Thanks in advance.