Hello,
I took the resnet50 PyTorch model from torchvision and exported it to ONNX. When I run it with image-classifier on the first 1000 images of the ImageNet data set, I see almost 20% accuracy loss compared to the resnet50 Caffe2 model (on the same 1000 images). This makes me wonder whether the options I am using to run the PyTorch model are incorrect. I am using “-use-imagenet-normalization”, “-compute-softmax” (the PyTorch model does not apply softmax at the end), and “-image-mode=0to1”. Does the input to the PyTorch model get normalized in the same way as for the Caffe2 model?
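For context, here is a small sketch (in NumPy, as an assumption of what those flags do, not taken from the Glow source) of the preprocessing convention that torchvision models expect, which differs from the original Caffe2 resnet50 (BGR in [0, 255] with mean subtraction only):

```python
import numpy as np

# Standard ImageNet statistics used by torchvision models (RGB order).
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def torchvision_preprocess(img_uint8):
    """img_uint8: HxWx3 RGB array with values in [0, 255]."""
    x = img_uint8.astype(np.float32) / 255.0   # analogue of -image-mode=0to1
    return (x - MEAN) / STD                    # analogue of -use-imagenet-normalization

# Example: a mid-gray image.
img = np.full((224, 224, 3), 128, dtype=np.uint8)
out = torchvision_preprocess(img)
print(out[0, 0])
```

If image-classifier applies a different channel order or mean/std for the Caffe2 model, that alone could explain a large accuracy gap.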
As a debugging step, I would recommend passing an input with constant values (e.g. torch.ones()) to both models and comparing the outputs.
If the outputs are equal, the issue most likely lies in the preprocessing of the data.
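The debugging step above can be sketched like this; `run_pytorch_model` and `run_caffe2_model` are hypothetical stand-ins for however you actually invoke each model (e.g. via image-classifier or a Python runtime):

```python
import numpy as np

def run_pytorch_model(x):
    # Placeholder: substitute a call to the exported ONNX/PyTorch model.
    return x.sum(axis=(1, 2, 3))

def run_caffe2_model(x):
    # Placeholder: substitute a call to the Caffe2 model.
    return x.sum(axis=(1, 2, 3))

# Constant input, bypassing all image preprocessing (analogue of torch.ones()).
x = np.ones((1, 3, 224, 224), dtype=np.float32)
a = run_pytorch_model(x)
b = run_caffe2_model(x)

# If the raw outputs agree, the accuracy gap most likely comes from
# preprocessing rather than from the export itself.
print(np.allclose(a, b, atol=1e-4))
```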
Thanks for the update. Unfortunately, I cannot be of much help, as I don’t know how these models (in Glow and ONNX) were created.
Just from reading through the ONNX link, the preprocessing appears to be identical to that of torchvision.models.