So my question is: how can I get the same result without using F.interpolate for inference? I'm reading the image with OpenCV and also normalizing it so the values are bounded between -1 and 1.
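For context, a rough sketch of the kind of F.interpolate resize I mean (the tensor size and scale factor here are just placeholders, not my actual values):

import torch
import torch.nn.functional as F

# toy example: resize a (1, 3, H, W) tensor with F.interpolate
x = torch.rand(1, 3, 480, 640)                       # placeholder input
y = F.interpolate(x, scale_factor=0.5, mode='area')
print(y.shape)                                       # torch.Size([1, 3, 240, 320])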
The interleaving in the processed image is most likely created by a view or reshape operation using the wrong memory layout (channels-last vs. channels-first).
I assume this operation might be causing the trouble:
im_np = im_np.reshape((im_rh, im_rw, 3))
Could you check the shape of im_np before applying the reshape operation? If the channels are in dim0, either change the reshape operation or permute the dimensions first.
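To illustrate what goes wrong, here is a minimal NumPy sketch (not your code) showing how reshape interleaves channel values while a transpose keeps them intact:

import numpy as np

# a tiny channels-first "image": 3 channels filled with the constants 0, 1, 2
chw = np.stack([np.full((2, 2), c) for c in range(3)])   # shape (3, 2, 2)

# wrong: reshape only reinterprets the flat memory, mixing values from different channels
wrong = chw.reshape(2, 2, 3)
print(wrong[0, 1])   # [0 1 1]  -> channel values interleaved

# right: transpose reorders the axes so each pixel keeps one value per channel
right = chw.transpose(1, 2, 0)
print(right[0, 1])   # [0 1 2]  -> as expected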
Exactly, the problem is not with F.interpolate or cv2.resize; im_np.reshape is just not the correct choice here. I did it this way and it works very well:
import cv2
import numpy as np

im = cv2.imread('/content/image.jpg')
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
# normalize values to scale them between -1 and 1
im = (im - 127.5) / 127.5
# find scale factors fx & fy (x and y are computed elsewhere from the target size)
# resize image
im = cv2.resize(im, None, fx=x, fy=y, interpolation=cv2.INTER_AREA)
# prepare input shape: HWC -> CHW plus a batch dimension
im = np.transpose(im)                                # (H, W, C) -> (C, W, H)
im = np.swapaxes(im, 1, 2)                           # (C, W, H) -> (C, H, W)
im = np.expand_dims(im, axis=0).astype('float32')    # (1, C, H, W)
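In case it helps, a possible follow-up step to feed this into a PyTorch model could look like the following (model here is just a placeholder for the actual network, not part of the snippet above):

import torch

# convert the prepared (1, 3, H, W) float32 array into a tensor and run inference
tensor = torch.from_numpy(np.ascontiguousarray(im))  # ascontiguousarray guards against non-contiguous layouts from the transposes
with torch.no_grad():
    output = model(tensor)                           # 'model' stands in for whatever network is being served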