I want to deploy my model in C++, but the results I get in libtorch differ from PyTorch: I get 90% accuracy in PyTorch but only 20% in C++. What could cause the difference?
I feed the model the same input size in both cases. Here is my code for input:
How did you load the image in C++?
If you’ve used OpenCV, note that its default color channel order is BGR, while torchvision uses PIL by default, which uses RGB.
If that’s the case, you would have to convert the color channels in your C++ implementation.
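To make the channel-order issue concrete, here is a minimal pure-Python sketch of what a BGR-to-RGB conversion does to an interleaved pixel buffer. In C++ with OpenCV the equivalent is a single `cvtColor(img, img, COLOR_BGR2RGB);` call; the helper below just illustrates the swap.

```python
def bgr_to_rgb(pixels):
    """Swap the first and third channel of every interleaved 3-value pixel."""
    out = list(pixels)
    for i in range(0, len(out), 3):
        out[i], out[i + 2] = out[i + 2], out[i]
    return out

# One pure-red pixel stored as BGR (B=0, G=0, R=255) becomes RGB:
print(bgr_to_rgb([0, 0, 255]))  # [255, 0, 0]
```

If this swap is missing, every pixel's red and blue values are exchanged before they reach the network, which alone can ruin accuracy.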
@ptrblck, I noticed the image channels were swapped by OpenCV, and I added `cvtColor(img, img, COLOR_RGB2BGR);` to fix the channel order. The results in C++ are still bad. Could converting the model from “.pth” to “.pt” affect the inference accuracy?
No, the file suffix shouldn’t change anything.
Did you make sure to apply exactly the same preprocessing pipeline in C++ as was used in Python (normalization etc.)?
If you are using the same preprocessing and load the same scripted or traced model, then the results should be the same up to the limits of floating point precision.
In that case I would recommend comparing the posted code to the code that yields the higher accuracy, and trying to narrow down potential discrepancies between them.
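As a reference for what the C++ side has to reproduce, here is a sketch of the per-pixel arithmetic that a typical torchvision pipeline (`ToTensor` followed by `Normalize`) applies. The ImageNet statistics below are an assumption for illustration; use whatever mean/std the actual Python training pipeline used.

```python
# Assumed ImageNet statistics -- replace with the values from your own pipeline.
MEAN = (0.485, 0.456, 0.406)
STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Map one 8-bit RGB pixel to the normalized floats the model expects:
    ToTensor scales to [0, 1], Normalize subtracts mean and divides by std."""
    return tuple((c / 255.0 - m) / s for c, m, s in zip(rgb, MEAN, STD))

print(normalize_pixel((255, 0, 128)))
```

A C++ implementation that skips the division by 255, uses different mean/std values, or applies them in a different channel order will feed the model inputs on a completely different scale.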
@ptrblck, I checked the code, and the data preprocessing is the same as in PyTorch. I found there are two ways to convert a PyTorch model to TorchScript, tracing and annotation (scripting), and I used tracing. The official docs say that tracing does not capture control flow such as `if/else`, of which there are several in my code. Could the limited control-flow handling of a traced TorchScript model explain why it can’t reach the same accuracy as PyTorch?
That might be the case, as only the executed forward pass will be traced and other paths through the model won’t be recorded.
Therefore you should use a scripted model, which will preserve the conditional branches.
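The difference can be demonstrated with a toy module (a sketch assuming PyTorch is installed; `Gate` is an illustrative module, not the poster’s model). Tracing records only the branch taken for the example input, while scripting compiles both branches:

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow: tracing cannot capture both branches.
        if x.sum() > 0:
            return x + 10
        return x - 10

m = Gate()
pos = torch.ones(3)
neg = -torch.ones(3)

traced = torch.jit.trace(m, pos)  # records only the x.sum() > 0 path
scripted = torch.jit.script(m)    # compiles both branches

print(traced(neg))    # still follows the x + 10 branch -> wrong result
print(scripted(neg))  # takes the x - 10 branch -> matches eager PyTorch
```

PyTorch also emits a `TracerWarning` when tracing a module with data-dependent control flow, which is a useful signal that `torch.jit.script` is needed instead.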