Is this output normal?

I received a model trained in PyTorch and converted it to CoreML. I used it in my project and it printed the output below:
[[46773.918 25729.756 74242.11 29962.623 91158.53 68990.445 67877.82 ]]

[[-590.66034 829.97705 -125.67301 -263.34863 448.17392 253.08405
-753.61444]]

Could you explain your issue a bit more?
Is this output unexpected and do you see any other outputs in your Python script?

I have a model in my project - the input is an image and the output is a MultiArray 1x7 (Float32).
I converted the PyTorch model to CoreML, but when I invoke the model from Python it works well: its output values are all less than 20 (Float32). So how can I prove the output is right?

I would recommend using a defined input (either constant values or values loaded from a file) and comparing the outputs based on that fixed input.
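Something like this sketch might work. The file names, the input/output names "input"/"output", and the 224x224 size are just placeholders (and CoreML prediction needs macOS), so adapt them to your model:

import numpy as np
import torch
import coremltools
from PIL import Image

# Fixed test input: feed the same picture to both models.
img = Image.open("test_face.jpg").resize((224, 224))

# PyTorch side: ToTensor-style preprocessing (scale to [0, 1], CHW layout).
x = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float() / 255.0
pytorch_model = torch.load("model.pt")  # or build the model and load its state_dict
pytorch_model.eval()
with torch.no_grad():
    out_pt = pytorch_model(x.unsqueeze(0)).numpy()

# CoreML side: feed the raw image; preprocessing must come from the converter args.
mlmodel = coremltools.models.MLModel("model.mlmodel")
out_cm = np.array(mlmodel.predict({"input": img})["output"]).reshape(1, -1)

# If the preprocessing matches, the two outputs should agree closely.
print(np.abs(out_pt - out_cm).max())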

@ptrblck Yeah, I used the same picture to compare the results, but the PC client's output is not the same as mine (on the iOS client), even though we both use the same trained model. The difference is that I convert it in two steps: first from PyTorch to ONNX, and then from ONNX to CoreML. In my mind, when the input is the same the output should be the same, so the .npy arrays should match.
The code I have is as below, but it doesn't use Normalize, only ToTensor. Is this the image preprocessing option?
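Something like this - just a generic ToTensor-only sketch, I'm not sure it matches the original training script exactly:

from torchvision import transforms

# Only ToTensor: converts a PIL image (H x W x C, uint8 in 0-255)
# into a FloatTensor (C x H x W) with values scaled to [0, 1].
preprocess = transforms.Compose([
    transforms.ToTensor(),
    # no transforms.Normalize(mean=..., std=...) here
])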

ToTensor normalizes the tensor so that all values lie in [0, 1].
If you've used it in the PyTorch script, you should definitely add it to your CoreML model.
I'm not familiar with CoreML and don't know how easy that is.
If you have trouble implementing it there, you might consider adding your preprocessing to the forward method of your model.
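As a rough sketch (assuming your pipeline really is just ToTensor, i.e. a division by 255; adjust if Normalize was also used):

import torch
import torch.nn as nn

class PreprocessWrapper(nn.Module):
    # Hypothetical wrapper that moves the ToTensor-style scaling into forward.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # x is expected to hold raw pixel values in [0, 255];
        # scale to [0, 1] exactly like ToTensor would.
        return self.model(x / 255.0)

# wrapped = PreprocessWrapper(trained_model)
# torch.onnx.export(wrapped, dummy_input, "model.onnx")  # then convert to CoreML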

@ptrblck I know PyTorch doesn't support embedding image preprocessing in the model, so when converting to CoreML you must do it manually yourself: I must add preprocess_args when converting. But the doubt I'm facing is that I don't know the correct preprocess_args, because the model was not trained by me and the person who trained it has left. For this scenario, how should I set the preprocess_args?
I set the args as below:
image_scale:1/255.0
red_bias/green_bias/blue_bias: 0.
Is it right?
I searched the documentation, but it didn't tell me anything about this.
I know the preprocess_args formulas as below:
image_scale = 1.0/std
red_bias = -red_mean /std
green_bias = - green_mean /std
blue_bias = - blue_mean/std
But I don't know the mean value for each color channel.
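If I had the mean/std, I think the conversion call would look like this - using the common ImageNet statistics here only as a placeholder, since the real values for this model are unknown, and assuming the ONNX input layer is named "input":

from onnx_coreml import convert

# Placeholder ImageNet statistics in the 0-255 pixel range (an assumption,
# not the unknown values actually used to train this model).
mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
std = [0.229 * 255, 0.224 * 255, 0.225 * 255]

# image_scale is a single scalar, so with per-channel std it can only be
# approximate; the red-channel std is used for all channels here.
scale = 1.0 / std[0]

mlmodel = convert(
    model="model.onnx",
    image_input_names=["input"],  # assumed input name
    preprocessing_args={
        "image_scale": scale,
        "red_bias": -mean[0] / std[0],
        "green_bias": -mean[1] / std[1],
        "blue_bias": -mean[2] / std[2],
    },
)
mlmodel.save("model.mlmodel")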

If I understand your use case correctly, you only have the model definition in PyTorch with the state_dict, but the actual code to train the model is missing?

If that’s the case, I think it will be too painful to try to guess how the data was processed.
Do you have any information about the preprocessing or any other steps?

If not, I would rather use the model and retrain it on your data with your known preprocessing pipeline.
Once you have your desired accuracy, you could start to deploy it.

@ptrblck Yeah, it's painful because the machine learning engineer didn't know about the preprocess_args.
The project is emotion detection, and I don't have the source code to train it again. So, because of the missing preprocess_args, the conversion succeeds but the model predicts wrongly. I know TensorFlow can handle this, but PyTorch needs me to do it manually.
I've been stuck on this question for 10 days just because I don't know the preprocess_args. The input of the model is an image and the output is a MultiArray 1x7.