Can't load and test a PyTorch model on images after converting from TensorFlow to ONNX to PyTorch

I'm getting an error like: RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 224, 4, 225] to have 3 channels, but got 224 channels instead

The model was originally trained in TensorFlow using the efficientnet_b0 architecture. I converted it from TensorFlow to ONNX with tensorflow-onnx, and from ONNX to PyTorch with the onnx2pytorch library:

import onnx
import torch
from onnx2pytorch import ConvertModel

onnx_model = onnx.load('eff_model.onnx')
pytorch_model = ConvertModel(onnx_model)
torch.save(pytorch_model, 'effnet_model_.pth')
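To double-check what the exporter produced, one can print the input shape declared in the ONNX graph; a minimal sketch, assuming the same file name as above:

import onnx

onnx_model = onnx.load('eff_model.onnx')
for inp in onnx_model.graph.input:
    # TF exports are usually NHWC, while PyTorch convs expect NCHW
    dims = [d.dim_value if d.dim_value else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)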

but I can't get any outputs when loading the model and predicting on real images:
import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(224),
    # transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])

model = torch.load('effnet_model_.pth')
model.eval()  # switch to inference mode

test = 'test_images/'
test_data = torchvision.datasets.ImageFolder(root=test, transform=transform)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1)

with torch.no_grad():
    for data in test_loader:
        images, labels = data
        outputs = model(images)
Please help out. Thanks!
@ptrblck your guidance helps a lot

Your current code seems to contain multiple issues:

  • The input shape is [1, 224, 4, 225], which is neither the channels-first memory format (expected) nor a channels-last one. Make sure to pass the input in the shape [batch_size, channels, height, width]. Also, you might want to pass the size argument to Resize as a tuple to make sure both spatial dimensions are equal (see the sketch after this list).
  • Even after permuting the input, it would still have 4 channels while 3 are expected, so you would also need to drop the extra (e.g. alpha) channel or convert the images to RGB.
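A minimal sketch of both fixes; the tensor names and shapes below are hypothetical, chosen to match the error message:

import torch
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # tuple -> both spatial dims become 224
    transforms.ToTensor(),          # returns channels-first [C, H, W]
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])

# If a batch arrives channels-last, e.g. [N, H, W, C], permute it to [N, C, H, W]:
x = torch.randn(1, 224, 225, 4)          # dummy channels-last batch (hypothetical)
x = x.permute(0, 3, 1, 2).contiguous()   # -> [1, 4, 224, 225]

# A 4-channel (e.g. RGBA) input still fails against 3-channel conv weights;
# either keep only the first 3 channels of the tensor ...
x = x[:, :3, :, :]                       # -> [1, 3, 224, 225]
# ... or convert the PIL image to RGB before ToTensor:
# img = img.convert('RGB')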