Using pack_padded_sequence and torch.jit for GPU model

I have an RNN layer in a model that I want to run torch.jit.trace on. When the model is on the CPU, tracing works fine. However, when I load the model onto the GPU, tracing fails with:

RuntimeError: 0INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646756402876/work/torch/csrc/jit/ir/alias_analysis.cpp":607, please report a bug to PyTorch. We don't have an op for aten::to but it isn't a special case. Argument types: Tensor?, int, bool, bool, NoneType.

I have traced the error to this line:

x = nn.utils.rnn.pack_padded_sequence(x, valid_frames.to(torch.device('cpu')), 
                                      batch_first=True, enforce_sorted=True)  

valid_frames is a CUDA tensor when the rest of the model is on the GPU. I have tried a couple of different approaches to avoid the use of .to(), but none of them work.

I am using torch 1.11.

Is valid_frames annotated as Optional[Tensor]? The Tensor? in the error message's argument types suggests it is. If so, you would need to either change the annotation to Tensor, or wrap the .to() cast in an if valid_frames is not None: check so TorchScript can refine the Optional type to Tensor before the cast.
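For illustration, here is a minimal sketch of the second option. The module and all names in it are hypothetical (not the asker's actual model); it only shows how a None check lets TorchScript narrow Optional[Tensor] to Tensor before calling .to():

import torch
import torch.nn as nn
from typing import Optional

class PackExample(nn.Module):
    def forward(self, x: torch.Tensor,
                valid_frames: Optional[torch.Tensor]) -> torch.Tensor:
        # The None check refines valid_frames from Optional[Tensor]
        # to Tensor inside the branch, so the .to() call no longer
        # sees the Tensor? argument type from the reported error.
        if valid_frames is not None:
            lengths = valid_frames.to(torch.device('cpu'))
        else:
            raise RuntimeError("valid_frames is required")
        packed = nn.utils.rnn.pack_padded_sequence(
            x, lengths, batch_first=True, enforce_sorted=True)
        return packed.data

m = torch.jit.script(PackExample())
x = torch.randn(2, 4, 3)           # batch of 2, max length 4, 3 features
out = m(x, torch.tensor([4, 2]))   # per-sequence valid lengths
print(out.shape)                   # packed data: 4 + 2 = 6 steps, 3 features

Note this sketch uses torch.jit.script rather than torch.jit.trace, since the Optional annotation and the refinement only matter to the script compiler.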