Expected scalar type Float but found ComplexDouble

Hi,
I’m trying to run a NN with complex tensors.
I keep getting the following error: RuntimeError: expected scalar type Float but found ComplexDouble
I suspect it's because the weights are not complex?
If so, how do I change that?

> import torch
> import torch.nn as nn
>
> class NeuralNet(nn.Module):
>     def __init__(self, input_nodes, hidden1_nodes, hidden2_nodes, hidden3_nodes, output_nodes):
>         super(NeuralNet, self).__init__()
>         self.type(torch.complex128)
>         self.fc1 = nn.Linear(input_nodes, hidden1_nodes)
>         self.relu = nn.ReLU()
>         self.fc2 = nn.Linear(hidden1_nodes, hidden2_nodes)
>         self.fc3 = nn.Linear(hidden2_nodes, hidden3_nodes)
>         self.fc4 = nn.Linear(hidden3_nodes, output_nodes)

Thanks!

  1. Use .to these days instead of .type (which is a very odd function; the most surprising thing is that it wasn't deprecated two years ago).
  2. .to (or .type) changes the state of the NeuralNet, and this also changes all submodules registered so far. But if you add submodules later, these will not be party to the change and will keep the default dtype (float).
  3. This suggests that moving the .to to after the submodule registration should work better, as in the sketch below.
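
In code, that reordering would look roughly like this (a minimal sketch reusing the class from the question; depending on the PyTorch version, .to may reject complex dtypes, in which case .type(torch.complex128) in the same spot is the fallback):

> import torch
> import torch.nn as nn
>
> class NeuralNet(nn.Module):
>     def __init__(self, input_nodes, hidden1_nodes, hidden2_nodes, hidden3_nodes, output_nodes):
>         super().__init__()
>         self.fc1 = nn.Linear(input_nodes, hidden1_nodes)
>         self.relu = nn.ReLU()
>         self.fc2 = nn.Linear(hidden1_nodes, hidden2_nodes)
>         self.fc3 = nn.Linear(hidden2_nodes, hidden3_nodes)
>         self.fc4 = nn.Linear(hidden3_nodes, output_nodes)
>         # cast last, after every submodule is registered,
>         # so the dtype change reaches all of them
>         self.to(torch.complex128)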

Best regards

Thomas

Thanks!
I was not aware of this. But .to throws the following error:

nn.Module.to only accepts floating point dtypes

However, staying with .type but moving it to after registration did do something, because now I get a different error:

RuntimeError: addmm does not support automatic differentiation for outputs with complex dtype.

Any suggestions?

I’d probably write my own ComplexLinear class that starts out with complex parameters and uses matmul instead of addmm.
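
A minimal sketch of that idea (the initialization scale here is an arbitrary assumption, not a vetted scheme):

> import torch
> import torch.nn as nn
>
> class ComplexLinear(nn.Module):
>     def __init__(self, in_features, out_features):
>         super().__init__()
>         # parameters are complex from the start, so no cast is needed
>         self.weight = nn.Parameter(
>             0.01 * torch.randn(out_features, in_features, dtype=torch.complex128))
>         self.bias = nn.Parameter(
>             torch.zeros(out_features, dtype=torch.complex128))
>
>     def forward(self, x):
>         # matmul + add instead of the fused addmm that nn.Linear dispatches to,
>         # since addmm does not support autograd for complex outputs here
>         return torch.matmul(x, self.weight.t()) + self.bias

Each nn.Linear in the network would then be replaced by a ComplexLinear.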

Got it.
Thanks for your time!