I have a fine-tuned, pretrained classification vision model that I want to run on a Jetson module.
Once the model is trained on my custom dataset, I convert it to ONNX, and then on the Jetson I build a TensorRT engine with trtexec.
I capture images for inference from a USB camera in JPEG format and decode them into a CUDA zero-copy memory array. The decoded image is in HWC (channels-last) layout, which is the only format the JPEG decoder library supports.
I used to have TensorFlow->ONNX->TensorRT models that I could feed directly with these in-memory images.
However, the PyTorch model expects CHW (channels-first) images.
I understand from this that channels-last models are more efficient, and my input is already in channels-last format.
I would like to (somehow) convert my PyTorch model to a TensorRT engine that accepts channels-last images, so I don't need to add extra preprocessing for my images and also get the performance benefits.
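To make concrete what I mean by "accept channels-last images": something equivalent to this hypothetical wrapper (NHWCWrapper is a name I made up), which takes NHWC input and permutes internally, so the exported ONNX graph would declare an NHWC input with a single Transpose node that TensorRT could then fold into the first layer:

```python
import torch
import torch.nn as nn

class NHWCWrapper(nn.Module):
    """Hypothetical sketch: accept NHWC input, permute to NCHW internally,
    and forward to the original channels-first model."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x arrives as [N, H, W, C] (the layout my JPEG decoder produces)
        x = x.permute(0, 3, 1, 2)  # -> [N, C, H, W] expected by the model
        return self.model(x)

# Example with a toy model; the real one would be my fine-tuned classifier.
base = nn.Conv2d(3, 8, kernel_size=3, padding=1)
wrapped = NHWCWrapper(base)
nhwc_input = torch.randn(1, 224, 224, 3)  # channels-last sample
out = wrapped(nhwc_input)
print(out.shape)  # [1, 8, 224, 224]
```

But I don't know if this wrapper-plus-Transpose approach is the right way, or whether TensorRT actually elides the transpose, hence my question.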
Is that possible? How?
I have tried doing:
model = model.to(memory_format=torch.channels_last)
before exporting with torch.onnx.export, but when I inspect the ONNX model the input still has type:
float32[batch_size,3,img_height,img_width], so I suppose that once I convert it to a TensorRT engine it will still expect the image in channels-first format.
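As far as I can tell, this is because torch.channels_last only changes the in-memory strides of the tensor, not its logical shape, so the exported graph keeps the NCHW signature. A minimal check of this behavior:

```python
import torch

# channels_last changes how the data is laid out in memory,
# but the logical shape stays NCHW.
x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)

print(x.shape)    # torch.Size([1, 3, 224, 224]) -- still NCHW logically
print(x.stride()) # (150528, 1, 672, 3) -- channel stride is 1, i.e. NHWC in memory
```

So the ONNX export sees the same [N, 3, H, W] shape either way, which would explain what I observed.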
What is the best possible solution?