I have a PyTorch model that I convert to a CoreML model. During conversion I change the input type from MultiArray to Image; the MultiArray shape is 1x3x224x224, and I drop the batch dimension when converting the input to an image. My doubt is: does this influence the result?
Your PyTorch model should raise an error if you try to pass the image tensor without a batch dimension. I don't know how CoreML handles it; you might get a better answer in their repository or discussion board, if one exists.
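To illustrate the batch-dimension point on the PyTorch side: a minimal sketch, assuming a generic NCHW classifier (the tiny `nn.Sequential` below is a stand-in for your real model), showing how to add the missing batch dimension with `unsqueeze` before inference:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; any model expecting NCHW input behaves the same.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

img = torch.randn(3, 224, 224)   # single image without a batch dimension
batched = img.unsqueeze(0)       # -> shape (1, 3, 224, 224)
out = model(batched)
print(out.shape)                 # torch.Size([1, 10])
```

On the CoreML side, the converted model's image input typically has no explicit batch axis, so dropping the leading 1 during conversion is expected rather than harmful; the PyTorch graph itself still computes on a batch of one.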
@ptrblck How can I make both the input and the output an image in PyTorch?
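An image-to-image model is just a network that maps `(N, 3, H, W)` to `(N, 3, H, W)`. A minimal sketch, assuming a tiny convolutional autoencoder (the layer sizes are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Image in, image out: (N, 3, H, W) -> (N, 3, H, W)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # halves H and W
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),  # back to H, W
            nn.Sigmoid(),  # keep outputs in [0, 1] so they can be read as pixels
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
x = torch.rand(1, 3, 224, 224)
y = model(x)
print(y.shape)  # torch.Size([1, 3, 224, 224])
```

When converting such a model with coremltools, you would declare both the input and the output as image types so CoreML treats them as images rather than MultiArrays.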
@ptrblck Hi, can I add a scaler into the PyTorch model and then add preprocess_args to my mlmodel file? And how can I change the input type from MultiArray to Image? I want to build an image classifier.
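One way to "add a scaler into the model" is to wrap the classifier so the scaling and normalization happen inside `forward`; the baked-in preprocessing then travels with the exported graph instead of living in external preprocess args. A minimal sketch, assuming raw pixel inputs in [0, 255] and the usual ImageNet mean/std (swap in your own values; the tiny backbone is hypothetical):

```python
import torch
import torch.nn as nn

class NormalizedClassifier(nn.Module):
    """Wraps a classifier so input scaling lives inside the model graph."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        # Buffers so the constants are exported along with the weights.
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        x = x / 255.0                    # scale raw pixels to [0, 1]
        x = (x - self.mean) / self.std   # per-channel normalization
        return self.backbone(x)

# Hypothetical tiny backbone standing in for a real image classifier.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model = NormalizedClassifier(backbone).eval()
logits = model(torch.randint(0, 256, (1, 3, 224, 224)).float())
print(logits.shape)  # torch.Size([1, 10])
```

Alternatively, if I recall the coremltools API correctly, you can leave the model unwrapped and instead declare the input as `coremltools.ImageType(...)` with `scale` and `bias` arguments during `coremltools.convert(...)`, which both switches the input from MultiArray to Image and applies the preprocessing on the CoreML side; check the coremltools docs for the exact signature.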