Converted to Core ML, but predictions are wrong

Preprocessing in PyTorch
Earlier you said that if I never did image preprocessing, I should skip it in my Core ML model as well. But Core ML only sees raw pixels in the 0–255 range, so if the PyTorch model was trained without image preprocessing, how should I handle the model to get the right answer?
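For context on what "Core ML only knows pixels 0–255" means, here is a minimal sketch (plain Python, not the Core ML API itself) of the affine transform Core ML's built-in image preprocessing applies to each raw pixel; the function name is just for illustration:

```python
def coreml_style_preprocess(pixel, image_scale, bias):
    # Core ML's image scaler computes: scaled = image_scale * pixel + bias,
    # where pixel is a raw 0-255 value and bias is per-channel
    # (red_bias / green_bias / blue_bias).
    return image_scale * pixel + bias

# With image_scale = 1/255.0 and zero bias, raw 0-255 pixels map to 0.0-1.0,
# i.e. the common "divide by 255" normalization.
print(coreml_style_preprocess(255, 1 / 255.0, 0.0))  # 1.0
print(coreml_style_preprocess(0, 1 / 255.0, 0.0))    # 0.0
```

So if the PyTorch model really expects raw 0–255 input, the equivalent Core ML settings would presumably be image_scale=1.0 and all biases 0.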

So there are three things I want to confirm:

  • 1. Is it possible to change the input type from MultiArray to Image?
  • 2. Is it possible to add image preprocessing to the model?
  • 3. How do I find out what the preprocessing steps were?
    I don’t know what preprocessing arguments the model was trained with; I asked the engineer, and he replied that he didn’t know either.
    When converting from ONNX to Core ML, I added preprocessing arguments myself: only image_scale=1/255.0, with red_bias/green_bias/blue_bias left unset.
    I know that even if the PyTorch model did include preprocessing, that information is lost when converting to ONNX, since ONNX doesn’t carry image preprocessing; so when converting to Core ML I must add it manually. But perhaps the two cases end up the same.
    By the way, what happens if the PyTorch model really was trained without image preprocessing? Or does a MultiArray input simply never need preprocessing?
    What is the difference between MultiArray and Image inputs as far as preprocessing is concerned?
    Thanks for your info. This is so painful.
    Actually, the Python training code should have a preprocess_input or normalize step, but if the training pipeline doesn’t have that step, does it affect the result?
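For what it’s worth, here is how I understand the mapping from a typical torchvision-style Normalize(mean, std) (applied after dividing by 255) to Core ML’s scale-plus-bias form. This is my own sketch with made-up example values, and it only works exactly when std is the same for all three channels, since Core ML has a single image_scale but per-channel biases:

```python
# Hypothetical helper: convert torchvision-style (mean, std) normalization
# into Core ML-style image_scale + per-channel biases.
#   PyTorch:  y = (pixel / 255 - mean_c) / std_c
#   Core ML:  y = image_scale * pixel + bias_c
# Matching terms gives: image_scale = 1 / (255 * std), bias_c = -mean_c / std.
def normalize_to_coreml(mean, std):
    # Core ML exposes only one image_scale, so this exact mapping
    # requires the same std for every channel.
    assert len(set(std)) == 1, "needs equal std across channels"
    s = std[0]
    image_scale = 1.0 / (255.0 * s)
    biases = [-m / s for m in mean]  # red_bias, green_bias, blue_bias
    return image_scale, biases

# Example with made-up values mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5):
scale, biases = normalize_to_coreml([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

# Check one pixel value both ways:
pixel = 255
pytorch_result = (pixel / 255.0 - 0.5) / 0.5
coreml_result = scale * pixel + biases[0]
print(pytorch_result, coreml_result)  # both 1.0
```

If the training code truly had no normalize step at all, then by the same reasoning the raw 0–255 pixels went straight into the network, and the conversion should use image_scale=1.0 with zero biases rather than 1/255.0.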