ResNet50 PyTorch Model Accuracy Loss

Hello,
I took the ResNet50 PyTorch model from torchvision and exported it to ONNX. When I ran it with image-classifier on the first 1000 images of the ImageNet dataset, I am seeing almost 20% accuracy loss compared to the ResNet50 Caffe2 model (on the same 1000 images). It makes me wonder if the options I am using for running the PyTorch model are not correct. I am using “-use-imagenet-normalization”, “-compute-softmax” (the PyTorch model does not apply softmax at the end), and “-image-mode=0to1”. Does the input to the PyTorch model get normalized in the same way as for the Caffe2 model?
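For reference, this is roughly the preprocessing I believe the torchvision ResNet50 expects (scale pixels to [0, 1], then normalize with the usual ImageNet mean/std), and which I assumed the “-image-mode=0to1” plus “-use-imagenet-normalization” options reproduce. This is just my own sketch of the torchvision side, not taken from the Glow sources:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Preprocessing the torchvision ImageNet models expect:
# resize/crop, scale to [0, 1], then normalize with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True).eval()
img = preprocess(Image.open("some_image.jpg")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)  # the model itself has no softmax
```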

What can I check to debug this further?

Where did you get the Caffe2 model from?

As a debugging step, I would recommend passing an input with constant values (e.g. torch.ones()) to both models and comparing the outputs.
If the outputs are equal, the issue most likely lies in the preprocessing of the data.
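Something along these lines should work as a quick check (a rough sketch; I’m assuming onnxruntime is installed and the exported file is called resnet50.onnx, so adapt the name to your setup):

```python
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# A constant input removes image preprocessing from the equation entirely.
x = torch.ones(1, 3, 224, 224)

# Reference logits from the torchvision model.
model = torchvision.models.resnet50(pretrained=True).eval()
with torch.no_grad():
    ref = model(x).numpy()

# Logits from the exported ONNX model ("resnet50.onnx" is a placeholder name).
sess = ort.InferenceSession("resnet50.onnx")
input_name = sess.get_inputs()[0].name
out = sess.run(None, {input_name: x.numpy()})[0]

print("max abs diff:", np.abs(ref - out).max())
```

If the difference on this constant input is only numerical noise, the export itself is fine and the accuracy gap almost certainly comes from how the images are normalized before being fed to each model.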

I got the Caffe2 model from “http://fb-glow-assets.s3.amazonaws.com/models”, which is the URL Glow uses to download models.
https://github.com/pytorch/glow/blob/master/utils/download_datasets_and_models.py

I have also tried models from the ONNX model zoo, https://github.com/onnx/models/tree/master/vision/classification/resnet/model
“resnet50-v2-7.onnx” shows the same accuracy loss.

For the same input, the models do show different outputs. I have not tried torch.ones() yet, but I did compare them on the same image.

Thanks for the update. Unfortunately, I cannot be of much help, as I don’t know how these models (in Glow and the ONNX zoo) were created.
Just from reading through the ONNX link, it seems that the preprocessing is identical to the one used for the torchvision models.

Thanks for getting back to me.