Second forward call of torchscripted module breaks on CUDA

@ptrblck, thank you very much. Now it works.

Changes

torch~=1.13.0
# move every tokenized tensor to the GPU before calling the scripted model
tokenized_text_1 = tokenizer([text_1], **options)
inference_input_1 = {k: v.to(device) for k, v in tokenized_text_1.items()}
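
For context, here is a minimal end-to-end sketch of the working pattern. It assumes a Hugging Face tokenizer and model; the `bert-base-uncased` checkpoint, the example text, and the `options` values are placeholders, since the thread does not show them. The two points that matter are running on torch 1.13+ and moving every tokenized tensor to the device before calling the scripted module:

import torch
from transformers import AutoModel, AutoTokenizer

device = torch.device("cuda")

# torchscript=True makes the model return tuples so it can be traced
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", torchscript=True).to(device).eval()

# placeholder tokenizer options; the original `options` dict is not shown in the thread
options = {"padding": "max_length", "truncation": True, "return_tensors": "pt"}

tokenized_text_1 = tokenizer(["some example text"], **options)
inference_input_1 = {k: v.to(device) for k, v in tokenized_text_1.items()}

scripted = torch.jit.trace(
    model, (inference_input_1["input_ids"], inference_input_1["attention_mask"])
)

# with torch~=1.13.0 the second call no longer breaks on CUDA
with torch.no_grad():
    out_1 = scripted(inference_input_1["input_ids"], inference_input_1["attention_mask"])
    out_2 = scripted(inference_input_1["input_ids"], inference_input_1["attention_mask"])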