If I want to operate on both float- and complex-valued tensors, what is the best practice? Should everything be in complex format throughout the computation, or is it fine and efficient to multiply float and complex tensors together, assuming they have the same precision?

A simple example is solving an ODE `y'(t) = f(t, y, θ)`, where I may have a forward function that looks like:

```
def forward(self, t, y):
    return torch.exp(1j * self.theta * t) * y
```

where `self.theta` is a real-valued 1-D tensor of parameters, `t` is the real-valued time, and `y` is a complex-valued 1-D tensor. Should I store `self.theta` and `t` as float or complex objects?
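For concreteness, here is a minimal runnable sketch of the mixed-dtype case being asked about (the shapes and values are made up for illustration). PyTorch's type promotion upcasts the float operands to complex automatically when they meet a complex operand:

```python
import torch

# Assumed example values: theta is a real float32 parameter tensor,
# t a real scalar, and y a complex64 state vector.
theta = torch.tensor([0.5, 1.5], dtype=torch.float32)
t = torch.tensor(2.0)
y = torch.tensor([1.0 + 0.0j, 0.0 + 1.0j], dtype=torch.complex64)

# Multiplying by the Python scalar 1j promotes the float32 tensor to
# complex64, so the float/complex mix works without explicit casts.
out = torch.exp(1j * theta * t) * y
print(out.dtype)  # complex64
```

Here the float32 operands pair with complex64 (matching single precision), which is the same-precision situation described above.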