Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false

I’m getting this error from a custom loss function when calling torch.autograd.backward().
If I’m not mistaken, the problem seems to come from this part:

  File "custom_loss.py", line 184, in learn
    loss = self.criterion.forward(output, x)
  File "custom_loss.py", line 103, in forward
    A_2 = real_matmul(rho_2, (H_T[:, 2] - H_T[:, 1]))
  File "custom_loss.py", line 50, in real_matmul
    return A_real @ B_real - A_imag @ B_imag + 1j * (A_imag @ B_real + A_real @ B_imag)
 (Triggered internally at  /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)

The function calculates the matrix product of two complex matrices through real matrix multiplication, as bmm doesn’t support complex grads.
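For context, the helper probably looks roughly like this (a sketch reconstructed from line 50 of the traceback, assuming the function takes complex tensors and splits them itself; the real custom_loss.py may differ):

```python
import torch

def real_matmul(A, B):
    # Multiply two complex matrices using only real matmuls,
    # as a workaround for ops whose complex backward is unsupported.
    A_real, A_imag = A.real, A.imag
    B_real, B_imag = B.real, B.imag
    return A_real @ B_real - A_imag @ B_imag + 1j * (A_imag @ B_real + A_real @ B_imag)
```

The forward pass itself is fine; it is the gradient dtype that trips the check in backward().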
The error is then thrown here:

 File "custom_loss.py", line 185, in learn
  File "/home/fsoest/env/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/fsoest/env/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grad.scalar_type()) || (input_is_complex == grad_is_complex) to be true, but got false.

I ran into the same problem when using torch.fft.fft().
As a workaround I used the old interface, torch.fft(), which avoids this bug.

I had the same issue with custom functions that build complex tensors from float ones. I solved it by explicitly casting the real and imaginary parts to complex. You can try:

return (A_real @ B_real - A_imag @ B_imag).type(torch.complex64) \
    + 1j * (A_imag @ B_real + A_real @ B_imag).type(torch.complex64)
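In case it helps, here is a minimal self-contained check (a toy example of my own, not the original custom_loss.py) showing that the casted expression survives backward():

```python
import torch

# Two complex leaf tensors standing in for rho_2 and the H_T slice.
A = torch.randn(3, 3, dtype=torch.complex64, requires_grad=True)
B = torch.randn(3, 3, dtype=torch.complex64, requires_grad=True)

# Complex matmul via real matmuls, with explicit casts to complex64.
C = (A.real @ B.real - A.imag @ B.imag).type(torch.complex64) \
    + 1j * (A.imag @ B.real + A.real @ B.imag).type(torch.complex64)

# A real-valued scalar loss, so backward() needs no gradient argument.
loss = C.abs().sum()
loss.backward()
```

After backward(), A.grad and B.grad are populated with complex gradients, and no dtype-mismatch error is raised.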

Super! Thanks a lot for sharing this.