How to avoid device binding when converting a PyTorch model to TorchScript?

Hi, all
I am trying to convert a VITS TTS model into TorchScript.
I simply used `torch.jit.trace()`, but there is a newly created tensor inside the model (a length-scaled tensor):
`x = torch.arange(max_length, dtype=length.dtype, device=length.device)`
Sadly, after the conversion, the TorchScript model hard-codes the device of tensor `x`, so it cannot run inference on any device other than the one used during the conversion.
I tried `torch.jit.script` instead of trace, but it could not even complete the conversion successfully.
So, is there any way for a `torch.jit.trace()` conversion to support a dynamic device in my case?
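For reference, one workaround I have seen is to wrap only the tensor-creating step in a scripted helper and call it from the traced module: scripting records `lengths.device` symbolically instead of baking in the device seen during tracing. This is a minimal sketch, assuming the problematic code is a sequence-mask construction like the `torch.arange` line above; the module and function names here are made up for illustration:

```python
import torch

# Scripted helper: `lengths.device` stays dynamic in the compiled graph,
# unlike in a pure trace, where the device would be recorded as a constant.
@torch.jit.script
def sequence_mask(lengths: torch.Tensor, max_length: int) -> torch.Tensor:
    x = torch.arange(max_length, dtype=lengths.dtype, device=lengths.device)
    # Broadcast compare: (1, max_length) < (batch, 1) -> (batch, max_length)
    return x.unsqueeze(0) < lengths.unsqueeze(1)

# Hypothetical toy module standing in for the VITS submodule.
class Toy(torch.nn.Module):
    def forward(self, lengths: torch.Tensor) -> torch.Tensor:
        return sequence_mask(lengths, 8)

# Tracing the module preserves the scripted call instead of inlining
# a device constant.
traced = torch.jit.trace(Toy(), torch.tensor([3, 5]))
mask = traced(torch.tensor([2, 4]))
```

With this split, the traced model can be moved across devices, because the only device-dependent tensor creation lives inside the scripted function.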

I think this limitation in tracing is expected and scripting was the recommended approach to allow “dynamic” behavior. I don’t know if you are running into any issues using scripting but also note that TorchScript is in maintenance mode.
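To illustrate why scripting is the recommended route for dynamic behavior: tracing records a single execution path, so any data-dependent branch is frozen, while `torch.jit.script` compiles both branches. A minimal sketch with a made-up module:

```python
import torch

class Gate(torch.nn.Module):
    # Data-dependent control flow: a trace would freeze whichever
    # branch the example input happened to take; script keeps both.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.sum() > 0:
            return x * 2
        return x - 1

scripted = torch.jit.script(Gate())
pos = scripted(torch.ones(2))    # takes the first branch
neg = scripted(-torch.ones(2))   # takes the second branch
```

The same pattern applies to device handling: in scripted code, expressions such as `length.device` are evaluated at run time rather than captured as constants.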

Thanks ptrblck,
I have planned to convert the VITS model into ONNX format instead, because inference speed of the TorchScript VITS model is unbearably slow: it takes more than ten seconds to synthesize a text of moderate length on an RTX 3090, which may be caused by the many recurrent structures in the VITS model. The JIT conversion brings no improvement in inference speed, and is in fact far worse.