I have trained my model in Python, and now I'm trying to export it to our production environment, which has to run the model in C++.
I tried a few ways (trace/script/ONNX) to export the model and got basically similar results with all of them.
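The trace/script attempts were roughly along these lines (simplified; `model` and `features` are the same objects as in the ONNX snippet below):

import torch

# Trace (or script) the trained model and save it for loading from C++ (libtorch)
traced = torch.jit.trace(model, features)   # I also tried torch.jit.script(model)
traced.save("path/to/model.pt")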
For the ONNX route, for example, I did the following in Python:
torch.set_grad_enabled(False)  # disable autograd globally
model.eval()                   # switch to eval mode (dropout/batchnorm)
model.freeze()                 # LightningModule.freeze(): eval() + requires_grad=False on all params
model.to_onnx("path/to/file.onnx", input_sample=features, dynamo=True, export_params=True)
Then I ran inference in C++ following the ONNX Tutorials.
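Before blaming my C++ code, I assume the first step is to check the exported file itself from Python? Something like this (assuming onnxruntime is installed, and that my model takes `features` directly and returns a single logits tensor; please correct me if this isn't the right cross-check):

import numpy as np
import onnxruntime as ort
import torch

# Reference output from the original PyTorch model
with torch.no_grad():
    torch_out = model(features).cpu().numpy()

# Output from the exported ONNX file, run with onnxruntime on CPU
sess = ort.InferenceSession("path/to/file.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: features.cpu().numpy()})[0]

# If the export itself is fine, the logits should agree within float tolerance
print(np.allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5))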
The strange thing is that only 43% of the C++ results are exactly the same as the ones from Python.
Since it's a multi-task classification problem (6 label variables, each classified into 3 classes), I don't think this ratio is just a random result.
To avoid directly comparing floating-point numbers, I didn't compare the logits or probability values; instead, I applied an ArgMax on both the Python and C++ sides and then compared the integer class indices.
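For reference, the comparison step itself is roughly like this (file names are hypothetical; each array holds the already-ArgMax'ed class indices with shape (num_samples, 6)):

import numpy as np

py_pred = np.load("py_pred.npy")    # class indices from the Python run
cpp_pred = np.load("cpp_pred.npy")  # class indices dumped from the C++ run

per_prediction = (py_pred == cpp_pred).mean()          # fraction of individual predictions that agree
per_sample = (py_pred == cpp_pred).all(axis=1).mean()  # fraction of samples where all 6 labels agree
print(per_prediction, per_sample)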
I'm new to DL. How can I troubleshoot this, and where should I start?
Thanks!!!