Hi,

I have a question about autograd for complex-valued neural networks (Autograd mechanics — PyTorch 1.11.0 documentation). It seems that autograd works when differentiating complex-valued tensors, but does it also work for the layers in a neural network?

So let's say I define an F.Linear layer: can we define the weights to be complex-valued, and will the updates to this layer follow complex-valued backpropagation? Also, can the same thing be done for an nn.Conv2d layer?

Hi Anirudh!

Yes, autograd performs complex backpropagation through (most, if not all) pytorch layers and with respect to those layers’ parameters.

Consider this `Linear` example:

```
>>> import torch
>>> torch.__version__
'1.11.0'
>>> _ = torch.manual_seed (2022)
>>> lin = torch.nn.Linear (2, 2, bias = False, dtype = torch.complex64)
>>> lin.weight
Parameter containing:
tensor([[-0.1474+0.5967j,  0.3660-0.1681j],
        [-0.6700-0.1989j,  0.4149+0.3975j]], requires_grad=True)
>>> t = torch.randn (5, 2, dtype = torch.complex64)
>>> t
tensor([[-0.5672+0.5518j, -0.8078+0.3621j],
        [ 0.0075+0.4234j, -0.4115-0.4382j],
        [-1.5592+0.4851j,  0.6728-0.1796j],
        [ 0.4976+0.4027j, -0.9824-0.7075j],
        [-0.1651-0.3127j, -0.6328-0.4980j]])
>>> lin (t).norm().backward()
>>> lin.weight.grad
tensor([[-0.3748+1.0385j,  0.6620-0.4970j],
        [-1.2673-0.4408j,  0.8804+0.7391j]])
```
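As a quick check of the original question about updates (this snippet is my own addition, not from the thread): a plain gradient-descent step on the complex-valued weights behaves as you would hope — the real-valued loss decreases after the update.

```python
import torch

torch.manual_seed(0)
lin = torch.nn.Linear(2, 2, bias=False, dtype=torch.complex64)
t = torch.randn(5, 2, dtype=torch.complex64)

loss = lin(t).norm()          # a real-valued loss built from the complex outputs
loss.backward()
before = loss.item()

# one plain gradient-descent step on the complex-valued weights
with torch.no_grad():
    lin.weight -= 0.01 * lin.weight.grad

after = lin(t).norm().item()
print(before, after)          # the loss decreases after the step
```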

Note, the implementation of complex gradients has built into it, in some sense, the notion that you are calculating the gradient of a real-valued loss function. If you calculate the gradient of a complex “loss” function, you might not get the results you expect (depending upon your expectations).
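To make that concrete, here is a small sketch (my addition): for the real-valued loss |z|², the gradient autograd returns is proportional to the conjugate Wirtinger derivative — here 2·z — which is exactly the direction you want for gradient descent.

```python
import torch

z = torch.tensor(1.0 + 2.0j, requires_grad=True)
loss = (z * z.conj()).real   # |z|^2 = 5.0, a real-valued loss
loss.backward()

print(z.grad)                # 2*z, i.e. tensor(2.+4.j)
```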

Pytorch’s support for complex manipulations is still a work in progress, but most (if not all) of the basics have been implemented and work correctly.

Best.

K. Frank

Hi K.Frank,

Thanks for replying. I was trying to do the same for a convolutional layer.

The following code is a normal convolution operation:

```
m = torch.nn.Conv2d(16, 33, (3, 3))
input = torch.randn(20, 16, 50, 50)
output = m(input)
print(output)
```

But when I do it for a complex convolution,

```
m = torch.nn.Conv2d(16, 33, (3, 3), dtype=torch.complex64)
input = torch.randn(20, 16, 50, 50, dtype=torch.complex64)
output = m(input)
print(output)
```

it gives me the following error:

```
RuntimeError: "slow_conv2d_cpu" not implemented for 'ComplexFloat'
```

Hi Anirudh!

You’re right, `Conv2d` does not yet have complex support in the latest stable release (1.11.0) – that’s my fault for not double-checking. (As I said, pytorch’s complex support is still a work in progress.)

However, `Conv2d` does have complex support in the latest nightly build (a 1.12 version):

```
>>> import torch
>>> torch.__version__
'1.12.0.dev20220510'
>>> _ = torch.manual_seed (2022)
>>> conv = torch.nn.Conv2d (1, 1, 3, dtype = torch.complex64)
>>> conv (torch.randn (1, 1, 3, 3, dtype = torch.complex64))
tensor([[[[0.1691-0.2932j]]]], grad_fn=<AddBackward0>)
```
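And a quick follow-up check (my addition, assuming a build with complex `Conv2d` support): backpropagation works through the complex convolution just as in the `Linear` example above, producing complex-valued weight gradients.

```python
import torch

torch.manual_seed(0)
conv = torch.nn.Conv2d(1, 1, 3, dtype=torch.complex64)
x = torch.randn(1, 1, 4, 4, dtype=torch.complex64)

conv(x).norm().backward()       # a real-valued "loss" of the complex output

print(conv.weight.grad.shape)   # torch.Size([1, 1, 3, 3])
print(conv.weight.grad.dtype)   # torch.complex64
```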

Best.

K. Frank

Hi,

Can you please tell me how the complex-valued weights are initialized for complex convolutional and complex-valued linear layers?

My assumption is that both the real and imaginary components are independently initialized using the Xavier uniform initialization.
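For what it's worth, that assumption is easy to spell out in code. The helper below is my own sketch of that scheme, not PyTorch's actual default (check `torch.nn.init` and each layer's `reset_parameters` for the real behavior): it initializes the real and imaginary parts independently with Xavier (Glorot) uniform initialization.

```python
import math
import torch

def xavier_uniform_complex_(weight: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: apply Xavier-uniform initialization
    independently to the real and imaginary parts of a 2-d complex weight."""
    fan_out, fan_in = weight.shape
    bound = math.sqrt(6.0 / (fan_in + fan_out))
    with torch.no_grad():
        real = torch.empty(weight.shape).uniform_(-bound, bound)
        imag = torch.empty(weight.shape).uniform_(-bound, bound)
        weight.copy_(torch.complex(real, imag))
    return weight

lin = torch.nn.Linear(2, 2, bias=False, dtype=torch.complex64)
xavier_uniform_complex_(lin.weight)
print(lin.weight)
```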