I have a problem with PyTorch Mobile on Android. I trained a model in Python and converted it for Android. When I feed both models with torch.ones, they produce the same output. I read a PNG image on Android with:
final Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(img,
TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB);
In Python, I read the image with PIL.Image.open(path) and use torchvision.transforms.ToTensor() to convert it to a tensor. These two operations produce different outputs, which in turn changes the model output. Can you help me make both image-read operations produce the same input tensor for my model? Thanks.
final Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(img,
TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB);
This normalizes the input by the mean and std of the ImageNet dataset, which were used to train the pretrained torchvision models:
public static float[] TORCHVISION_NORM_MEAN_RGB = new float[] {0.485f, 0.456f, 0.406f};
public static float[] TORCHVISION_NORM_STD_RGB = new float[] {0.229f, 0.224f, 0.225f};
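Concretely, bitmapToFloat32Tensor scales each 8-bit channel to [0, 1] and then applies (value - mean) / std per channel. A minimal Python sketch of that arithmetic (the pixel values are made-up examples, not from the original post):

```python
# ImageNet channel statistics, as in TORCHVISION_NORM_MEAN_RGB / _STD_RGB
MEAN_RGB = (0.485, 0.456, 0.406)
STD_RGB = (0.229, 0.224, 0.225)

def normalize_pixel(rgb, mean=MEAN_RGB, std=STD_RGB):
    """Scale an 8-bit RGB pixel to [0, 1], then normalize per channel."""
    return tuple((c / 255.0 - m) / s for c, m, s in zip(rgb, mean, std))

# A black pixel maps to negative values, which is why negative numbers
# show up in the Android input tensor when these constants are used.
print(normalize_pixel((0, 0, 0)))
```

This also shows why the normalized tensor cannot match a plain ToTensor() result unless the same mean/std are applied on the Python side.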
Is this the right normalization for your model?
Could you share some details: what is the model, and how big is the difference in the results?
No, these values are not valid for my model, and I want to skip the input normalization step entirely, since I do not apply it in the original Python model either. When I read the image with this code, it returns some negative values, which I do not want. What I want is to just read the image and convert the RGB values to the 0-1 range. Here is how I read the input image in Python:
Okay, found the solution. Using [0, 0, 0] as the means and [1, 1, 1] as the stds does what I want: it skips the normalization and results in the same tensor as the Python version.
float[] means = new float[] {0.0f, 0.0f, 0.0f};
float[] stds = new float[] {1.0f, 1.0f, 1.0f};
final Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(img, means, stds);
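With zero means and unit stds, the (value - mean) / std step is a no-op, so the tensor ends up as the 8-bit value divided by 255 -- the same [0, 1] scaling that torchvision.transforms.ToTensor() applies to a PIL image. A quick pure-Python check of that claim (the pixel values are hypothetical):

```python
def to_float32(rgb, mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0)):
    """Mimic bitmapToFloat32Tensor's per-channel math for one pixel."""
    return tuple((c / 255.0 - m) / s for c, m, s in zip(rgb, mean, std))

# With mean 0 and std 1 this reduces to plain division by 255,
# matching ToTensor()'s scaling of an 8-bit image.
assert to_float32((0, 128, 255)) == (0.0, 128 / 255.0, 1.0)
```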
Why is there no bitmapToFloat32Tensor(img) overload that skips the normalization step? Or did I miss it?