Converted to CoreML, but predictions are wrong

I converted a PyTorch model to a CoreML model, but it always predicts incorrectly, and I don't know how to solve this issue.
The model input type is a multiArray of shape 1×3×224×224, and the output is a multiArray of shape 1×7.
When predicting, it always shows the same results.
I already added preprocess_args when converting from ONNX to CoreML, but I still cannot get the same results.

This seems like a duplicate of your other threads on this topic.

Based on all descriptions so far, it seems that you don’t know:

  • how the PyTorch training was done
  • what preprocessing steps were performed
  • how to match the CoreML output to the PyTorch model output

If that’s still the case, I doubt we can help you any further.

My best suggestion was:
Try to add a preprocessing pipeline manually, retrain the model, and export this known workflow to CoreML.
We are mainly focused on PyTorch here, so you most likely won't get much help regarding the CoreML deployment; you should ask in their discussion board or their GitHub repo if you encounter any issues with CoreML.

Preprocessing in PyTorch
There you said that if no image preprocessing was ever done, I should also skip image preprocessing in my CoreML model. But CoreML only knows pixel values in the range 0–255, so if the PyTorch model never did image preprocessing, how do I handle the model to get the right answer?

So I want to confirm three things:

  • 1. Is it possible to change the input type from multiArray to image?
  • 2. Is it possible to add image preprocessing in the model?
  • 3. How can I find out which preprocessing steps were used?
    I don't know which preprocessing args the model was trained with; I asked the engineer, and he replied that he didn't know.
    When converting from ONNX to CoreML, I added preprocessing args.
    I only added image_scale=1/255.0; I never added red_bias/blue_bias/green_bias.
    I know that even if the PyTorch model includes preprocessing, the ONNX export doesn't support image preprocessing, so when converting to CoreML I must add it manually. But perhaps they are the same.
    By the way, what happens if no image preprocessing was ever done in the PyTorch model? Or does a MultiArray input never need image preprocessing?
    What's the difference between multiArray and image inputs with respect to preprocessing?
    Thanks for your info. It's so painful.
    Actually, the Python training code should have a preprocess_input or normalize function, but if the training code doesn't have this step, does it influence the result?
  1. I don’t know what a multiArray is and assume it’s part of the CoreML model? If so, I would recommend asking in their board or GitHub.

  2. Yes, as answered here with an example.

  3. If neither you nor the engineer who worked on the model knows which preprocessing was done, unfortunately we cannot help. My best shot is still to retrain the model with a known preprocessing pipeline.
    Depending on which preprocessing steps were done, ONNX might support them. E.g. normalization is a simple subtraction and division, which is supported by ONNX. More complicated image transformations might not be supported, and you would have to implement them in CoreML or another library supported by your deployment platform. Since we do not know which preprocessing was done, it won’t really help to discuss which methods are implemented in ONNX etc.

MultiArray doesn’t seem to be a PyTorch class, and it seems your question was already answered in the onnx-coreml GitHub.

Normalization usually speeds up the training or makes it possible in the first place.

If normalization or any other preprocessing technique was used while training your model, you have to add the same steps to your deployment pipeline. Otherwise your model will just output random values and will most likely yield bad accuracy.
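For the common case where the only preprocessing is `(pixel/255 - mean) / std`, the matching CoreML-style image preprocessing parameters can be derived with simple algebra, since that preprocessing computes `image_scale * pixel + bias` per channel. Note that `image_scale` is a single scalar, so this mapping is only exact when the std is the same for all channels; the concrete numbers below are assumptions for illustration:

```python
# Map PyTorch normalization (pixel/255 - mean) / std to CoreML-style
# image preprocessing image_scale * pixel + bias:
#   image_scale = 1 / (255 * std)   (one scalar, so std must be shared)
#   bias_c      = -mean_c / std_c
mean = [0.485, 0.456, 0.406]  # assumed per-channel means
std = 0.226                   # assumed single std shared by all channels

image_scale = 1.0 / (255.0 * std)
red_bias, green_bias, blue_bias = (-m / std for m in mean)

# Sanity check: both formulations agree on a raw pixel value.
p = 200
torch_style = (p / 255.0 - mean[0]) / std
coreml_style = image_scale * p + red_bias
assert abs(torch_style - coreml_style) < 1e-9
```

If your per-channel stds differ, a scalar `image_scale` cannot represent the normalization exactly, which is another argument for baking the normalization into the model before export.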

MultiArray should be from NumPy.
For an image classifier, image preprocessing is necessary; if you don’t do it, the model never knows what it is looking at. So that’s why my model predicts wrongly.