Model.forward() automatically converts complex128 [a+bj] to stacked complex form [a,b]

Just before the return statement, the output is complex128,
but in the training loop the output of model.forward() arrives in stacked complex form [a, b]. For example:

    def forward(self, input):
        output = custom_function(input)
        print(output)
        return output

result: a+bj

----main----
output = model.forward(input)
print(output)

result: [a, b]
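For reference, the [a+bj] → [a, b] change is exactly what happens when a complex128 buffer is reinterpreted as float64: each complex value is stored as two adjacent float64s (real, imag). A minimal NumPy sketch of that reinterpretation (not your model's code, just an illustration of the memory layout):

```python
import numpy as np

# complex128 stores each element as two float64s (real, imag),
# so viewing the same memory as float64 yields the stacked form
z = np.array([3.0 + 4.0j])        # dtype complex128, value 3+4j
stacked = z.view(np.float64)      # same bytes, reinterpreted
print(stacked)                    # [3. 4.]
```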

I don’t understand how this is happening or how to fix it.

Can I get some advice?
Thanks for reading!

FYI, I have wrapped the model with DataParallel.
Maybe this causes the issue?

class DataParallelPassthrough(torch.nn.DataParallel):
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.module, name)


model = DataParallelPassthrough(model)
model.cuda()
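If DataParallel's gather step is indeed what flattens the complex output, one possible workaround is to do the real/complex conversion explicitly yourself, so the gather only ever sees plain float tensors. This is a sketch under that assumption (the `ComplexSafeWrapper` name and `custom_function` call site are hypothetical), using `torch.view_as_real` / `torch.view_as_complex`, which are zero-copy views available in recent PyTorch versions:

```python
import torch

class ComplexSafeWrapper(torch.nn.Module):
    # Hypothetical wrapper: forward returns a real-valued view of
    # the complex output, so DataParallel gathers a float64 tensor
    # of shape (..., 2) instead of a complex128 tensor.
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, input):
        output = self.module(input)        # complex128, shape (...)
        return torch.view_as_real(output)  # float64, shape (..., 2)

# In the training loop, convert back after the gather:
# real_out = model(input)                              # (..., 2)
# complex_out = torch.view_as_complex(real_out.contiguous())
```

`view_as_complex` requires the last dimension to be 2 and the tensor to be contiguous, hence the `.contiguous()` call after gathering.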