Complex-valued neural network

Hi,

I am trying to use complex-valued data as input to a test neural network. According to the release notes, PyTorch 1.8.0 supports complex autograd. My code is as follows.

import torch
from torch import nn, optim


class ComplexTest(nn.Module):
    def __init__(self):
        super(ComplexTest, self).__init__()
        self.fc1 = nn.Linear(10, 20)
        self.fc2 = nn.Linear(20, 10)
        self.relu = nn.ReLU()

    def forward(self, inputs):
        return self.fc2(self.relu(self.fc1(inputs)))


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
complex_test = ComplexTest().to(device)
complex_test.train()

opt = optim.Adam(complex_test.parameters())

mse_loss = nn.MSELoss()

for _ in range(100):
    opt.zero_grad()

    inp = torch.randn((1000, 10), dtype=torch.cfloat).to(device)
    op = complex_test(inp)

    loss = mse_loss(op, inp)
    loss.backward()
    opt.step()
    print(loss.item())

But this gives an error:

expected scalar type Float but found ComplexFloat

Is this not supported, or did I read the documentation wrong? Thanks!

Hi, when you create nn.Linear it is initialized with Float by default; you may need to cast the network weights to complex after you create it.
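
For example, something like this (just a sketch, assuming the ComplexTest module from your post; you can also cast each layer individually):

# cast all parameters of the module to complex after construction
complex_test = ComplexTest().to(torch.cfloat).to(device)

# the weights and biases are now ComplexFloat
print(complex_test.fc1.weight.dtype)  # torch.complex64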

@omarfoq, thank you for your response! I tried adding .to(torch.cfloat) as you mentioned, and my code now looks as below.

import torch
from torch import nn, optim


class ComplexTest(nn.Module):
    def __init__(self):
        super(ComplexTest, self).__init__()
        self.fc1 = nn.Linear(10, 20).to(torch.cfloat)
        self.fc2 = nn.Linear(20, 10).to(torch.cfloat)
        self.relu = nn.ReLU()

    def forward(self, inputs):
        return self.fc2(self.relu(self.fc1(inputs)))


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
complex_test = ComplexTest().to(device)
complex_test.train()

opt = optim.Adam(complex_test.parameters())

mse_loss = nn.MSELoss()

for _ in range(100):
    opt.zero_grad()

    inp = torch.randn((1000, 10), dtype=torch.cfloat).to(device)
    op = complex_test(inp)

    loss = mse_loss(op, inp)
    loss.backward()
    opt.step()
    print(loss.item())

Now the layers no longer have a problem with the complex-valued input, but I could not figure out how to do the same for the activation function. The following error pops up instead.

"threshold_cpu" not implemented for 'ComplexFloat'

How can I proceed?

Hello,

I think the ReLU activation doesn't work for complex numbers.

Hello again! So the dummy setup I mentioned is not runnable under any circumstances with PyTorch 1.8.1? I was really looking forward to it.

Hello,

ReLU is in fact a function based on comparing a real number with 0, and since there is no natural ordering of the complex numbers, taking the max of a complex number and zero is not well defined, and neither is ReLU for complex numbers.

Depending on the problem you are working on, I think you can define your own activations or use activations that are adapted to complex numbers (this includes all holomorphic functions).
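
For example, here is a rough sketch of one such custom activation, modReLU (from Arjovsky et al.), which applies ReLU to the magnitude and keeps the phase. The class name and the small epsilon are just my choices for illustration, and I have not tested this on 1.8.1, so treat it as a starting point:

import torch
from torch import nn


class ModReLU(nn.Module):
    # Sketch of modReLU: shift the magnitude by a learnable bias,
    # apply ReLU to it, and keep the original phase.
    def __init__(self, features):
        super().__init__()
        self.b = nn.Parameter(torch.zeros(features))

    def forward(self, z):
        mag = torch.abs(z)                       # real-valued magnitude
        phase = z / (mag + 1e-8)                 # unit-modulus phase factor
        return torch.relu(mag + self.b) * phase  # complex output

Replacing self.relu = nn.ReLU() with something like ModReLU(20) in your module above should then accept complex activations.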

Hello,
For those who are searching for complex-valued NN/CNN implementations, the following library can be of help:

It implements complex-valued activations and layers; the ReLU function for complex values is implemented as CReLU.
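
If you prefer not to add a dependency, a CReLU-style activation is also easy to write by hand, assuming CReLU here means ReLU applied independently to the real and imaginary parts (the module below is my own sketch, not the library's code):

import torch
from torch import nn


class CReLU(nn.Module):
    # Sketch: apply ReLU separately to the real and imaginary parts
    # and recombine them into a complex tensor.
    def forward(self, z):
        return torch.complex(torch.relu(z.real), torch.relu(z.imag))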