I have been using tracing to export PyTorch models to C++ libtorch with no trouble so far on several different models. I usually train and export the models on Linux with a GPU, and actually use them on Windows, on CPU. All good.
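For reference, my export step looks roughly like this (a minimal sketch with a stand-in module; the real network, input shapes, and file names are placeholders):

```python
import torch
import torch.nn as nn

# Stand-in for the real network: any module whose forward() takes (image, mask).
class DummyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 3, kernel_size=3, padding=1)

    def forward(self, image, mask):
        return self.conv(torch.cat([image, mask], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DummyNet().to(device).eval()

# Example inputs with placeholder shapes; the real pair is (image, mask).
image = torch.rand(1, 3, 256, 256, device=device)
mask = torch.rand(1, 1, 256, 256, device=device)

# Trace with example inputs and serialize for loading from libtorch.
traced = torch.jit.trace(model, (image, mask))
traced.save("traced_model.pt")
```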
However, I ran into a crash when trying to do the same thing with one particular model:
- Tracing the model and running it through libtorch on GPU is fine.
- On CPU, the same model crashes when calling the forward() method.
I then tried to generate the traced model on CPU instead. Using this CPU-traced model I am able to:
- load and run the traced model in Python: feeding it the expected inputs (i.e. an (image, mask) pair) produces the expected result;
- load and run the model in C++ with dummy inputs (random at::Tensor of the correct sizes).
However, loading the model in C++ and running it with the real inputs (image, mask) crashes when calling forward(). Using this CPU-traced model on GPU does not crash, but produces erroneous output (a black or uninitialized image).
I have not been able to isolate where the error comes from, although I suspect something goes wrong during model tracing. Tracing itself completes without any error message.
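For what it's worth, my Python-side check of the CPU-traced model looks roughly like this (simplified, with random placeholder inputs standing in for the real image/mask pair, and assuming the "traced_model.pt" file from the export step above):

```python
import torch

# Load the CPU-traced module and run it with placeholder inputs;
# the real check feeds the actual image/mask pair used at inference time.
traced = torch.jit.load("traced_model.pt", map_location="cpu")
traced.eval()

image = torch.rand(1, 3, 256, 256)
mask = torch.rand(1, 1, 256, 256)

with torch.no_grad():
    out_traced = traced(image, mask)

# Comparing against the original (eager) model on the same inputs matches on my side:
# out_eager = model(image, mask)
# assert torch.allclose(out_traced, out_eager, atol=1e-5)
print(out_traced.shape)
```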
Any ideas?
I’m using PyTorch 1.3 and libtorch 1.3 on both Linux and Windows.