PyTorch TensorRT compile

I tried to convert a TorchScript model and got the error below:
“WARNING: [Torch-TensorRT] - Input 0 of engine _run_on_acc_0_engine was found to be on cuda:1 but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (Issues · pytorch/TensorRT · GitHub) if this warning persists.”

self.__network = torch.load(weight_file, map_location=self.__device)
self.__network1 = torch_tensorrt.compile(self.__network, ir='dynamo')

I have two GPUs in my machine.

Have you checked the device of the input and the model, as the warning points to a device mismatch?
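A quick way to verify this is to compare devices directly before calling the engine. This is a minimal sketch; `check_same_device` is a hypothetical helper written for illustration, not part of torch or torch_tensorrt:

```python
import torch

# Hypothetical helper: compare the device of the model's parameters
# with the device of the input tensor.
def check_same_device(model: torch.nn.Module, x: torch.Tensor) -> bool:
    param_device = next(model.parameters()).device
    return param_device == x.device

model = torch.nn.Linear(4, 2)       # parameters live on CPU by default
x = torch.randn(1, 4)               # CPU tensor
print(check_same_device(model, x))  # True: both on CPU
```

Note that the warning in the original post can still appear even when this check passes, because the serialized TensorRT engine itself records the device it was built on.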

Yes. The model was converted on GPU 0; I then load the weights file on GPU 1, and during inference the input is also on GPU 1, so the input and the model are on the same device/GPU id.
In short, I converted the model on GPU 0 but compile and run inference on GPU 1, and I get the same kind of error for the TorchScript model as well.

I’m unsure if the issue is caused by TorchScript, but could you move the model to the CPU before saving and move it back to the desired GPU after loading?
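The CPU round trip suggested above can be sketched like this (a minimal sketch with a toy `nn.Linear`; the file name and the `cuda:1` target are assumptions for illustration):

```python
import torch

model = torch.nn.Linear(4, 2)

# Move to CPU before saving so no GPU id is baked into the checkpoint.
model.cpu()
torch.save(model, "weights.pt")

# Load onto CPU first (weights_only=False because a full module was
# pickled), then move to the desired GPU only if one is available.
loaded = torch.load("weights.pt", map_location="cpu", weights_only=False)
device = "cuda:1" if torch.cuda.device_count() > 1 else "cpu"
loaded = loaded.to(device)
```

Alternatively, saving `model.state_dict()` instead of the whole module avoids pickling any device information in the first place, and sidesteps the `weights_only` caveat of newer torch versions.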