RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'

Hello,

I have implemented a custom module that operates on complex tensors. The inputs, outputs, and trainable weights of this module have dtype=torch.complex64. When training this module with the Adam optimizer on a GPU, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/optim/adam.py", line 119, in step
    group['eps']
  File "/home/MyName/.local/lib/python3.6/site-packages/torch/optim/functional.py", line 87, in adam
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'

It seems that the torch.addcmul function cannot be applied to complex tensors on the GPU. To confirm this limitation of PyTorch, I ran the following code:

import torch

a = torch.randn(3, 4, dtype=torch.complex64).cuda()
b = torch.randn(3, 4, dtype=torch.complex64).cuda()
c = torch.randn(3, 4, dtype=torch.complex64).cuda()
torch.addcmul(a, b, c)

As expected, the error message showed up:

RuntimeError: "addcmul_cuda" not implemented for 'ComplexFloat'

However, when I run the same code on the CPU, no error occurs.
I also tested several other pointwise ops such as add, mul, and addcdiv. It seems that only addcmul and addcdiv lack support for complex tensors on the GPU.
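
For anyone who wants to reproduce the comparison, a rough sketch of the kind of check one could run looks like this (the set of ops is illustrative, not exhaustive, and it assumes a CUDA device is available):

import torch

ops = {
    "add":     lambda a, b, c: torch.add(a, b),
    "mul":     lambda a, b, c: torch.mul(a, b),
    "addcmul": lambda a, b, c: torch.addcmul(a, b, c),
    "addcdiv": lambda a, b, c: torch.addcdiv(a, b, c),
}

for device in ("cpu", "cuda"):
    a, b, c = (torch.randn(3, 4, dtype=torch.complex64, device=device) for _ in range(3))
    for name, op in ops.items():
        try:
            op(a, b, c)
            print(name, "on", device, ": ok")
        except RuntimeError as err:
            print(name, "on", device, ":", err)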

So my question is: how could I modify the addcmul/addcdiv source code so that I can use the Adam optimizer with my custom module that operates on complex tensors? Is this issue going to be fixed in upcoming PyTorch versions?

Thanks in advance.

Hi, I’m having the same issue. Have you found a solution yet?

Hi Xulin and Zhang!

Support for complex tensors in pytorch is a work in progress. I find,
just by trying, that addcmul() does not work with complex gpu tensors
using pytorch version 1.6.0, but does work with a recent nightly build,
version 1.8.0.dev20201203.

If you’re just experimenting with complex tensors, you might want to
upgrade to a current (unstable) nightly build, but if you’re doing “real”
work, it might make sense to avoid complex tensors until support for
them becomes stable.

Best.

K. Frank

Thanks for the help. This might be asking for too much, but do you know where I can find this type of information? Another place where I get stuck is complex tensor matrix multiplication (the @ operator), and I would like to check whether this is available in the preview build as well.
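
A quick probe for whichever build is installed might look like this (just a sketch; the GPU check assumes a CUDA device is available):

import torch

x = torch.randn(3, 4, dtype=torch.complex64)
y = torch.randn(4, 5, dtype=torch.complex64)

print((x @ y).shape)                      # complex matmul on CPU
if torch.cuda.is_available():
    print((x.cuda() @ y.cuda()).shape)    # complex matmul on GPU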

Hi Frank,

Thank you for the information. I will try it in my case later.

Here is what I did to circumvent this issue:
I converted the complex128 tensors into two-channel float64 tensors, where the first channel stores the real part and the second channel stores the imaginary part. I also wrote my own Linear and Conv2d layers, which take the two-channel tensors and perform the multiply and add operations according to the rules of complex arithmetic; a rough sketch of the Linear case is shown below.

After this customization, the network has no complex computation in its pipeline and can be optimized with traditional optimizers.
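
Concretely, the idea for the Linear case might look something like this (not my exact code; the class and parameter names are made up, and it keeps the real and imaginary parts as two separate float tensors rather than stacked channels):

import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    # The complex weight W = Wr + i*Wi is stored as two real Linear layers,
    # so optimizers such as Adam only ever see real-valued parameters.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc_r = nn.Linear(in_features, out_features)  # real part of W (and bias)
        self.fc_i = nn.Linear(in_features, out_features)  # imaginary part of W (and bias)

    def forward(self, x_r, x_i):
        # (Wr + i*Wi)(xr + i*xi) = (Wr*xr - Wi*xi) + i*(Wr*xi + Wi*xr)
        out_r = self.fc_r(x_r) - self.fc_i(x_i)
        out_i = self.fc_r(x_i) + self.fc_i(x_r)
        return out_r, out_i

# Usage: split a complex tensor into its real/imag parts before the network.
z = torch.randn(8, 16, dtype=torch.complex128)
layer = ComplexLinear(16, 32).double()   # float64 parameters to match complex128 inputs
y_r, y_i = layer(z.real, z.imag)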

I hope this information is helpful to anyone in need.

Best,
Yi Zhang