Reading an image with torchvision and cv2 does not give the same result

import torchvision
import torch
img=torchvision.io.read_image(path='/home/tejanm/Downloads/subjective_img/subjective_img/sample1.jpg')
img.dtype, img.shape

The above snippet outputs:

(torch.uint8, torch.Size([3, 500, 375]))

import cv2
img1 = cv2.imread('/home/tejanm/Downloads/subjective_img/subjective_img/sample1.jpg')
img1 = torch.from_numpy(img1)
img1 = img1.permute(2, 0, 1)
img1.dtype, img1.shape

The above snippet also outputs:

(torch.uint8, torch.Size([3, 500, 375]))
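As an aside, the `permute(2, 0, 1)` above converts OpenCV's `(H, W, C)` layout to the `(C, H, W)` layout torchvision uses; note also that OpenCV decodes channels in BGR order rather than RGB. A minimal numpy sketch of both conversions:

```python
import numpy as np

# Stand-in for cv2.imread output: an (H, W, C) uint8 array, channels in BGR order
bgr = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)

# Reverse the channel axis to go BGR -> RGB
# (equivalent to cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
rgb = bgr[..., ::-1]

# Move channels first, (H, W, C) -> (C, H, W), matching torchvision's layout
chw = rgb.transpose(2, 0, 1)
print(chw.shape)  # (3, 4, 4)
```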

So the image read with torchvision and the one read with cv2 have the same dtype and shape.

But,

(img1 == img).all()

is:

tensor(False)

torchvision uses PIL by default, so you could compare the arrays returned by PIL vs. OpenCV, which might use different decoders, etc.
It would also be interesting to see how large the abs().max() difference between the two arrays is.
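A hedged sketch of that `abs().max()` check, using small synthetic tensors in place of the two decoded images (with real data, remember that OpenCV returns channels in BGR order, so align the channel axis before comparing):

```python
import torch

# Synthetic stand-ins: `rgb` plays torchvision's output, `bgr` OpenCV's
# (same pixel values, channel axis reversed)
rgb = torch.arange(3 * 4 * 4, dtype=torch.uint8).reshape(3, 4, 4)
bgr = rgb.flip(0)  # reverse the channel dimension, as BGR vs. RGB would

# Naive comparison: a large max difference, because the channels are swapped
naive_diff = (rgb.int() - bgr.int()).abs().max()

# After reversing the channel axis back, the arrays agree exactly here;
# with real JPEGs a small residual may remain if the decoders differ
aligned_diff = (rgb.int() - bgr.flip(0).int()).abs().max()
print(naive_diff.item(), aligned_diff.item())  # 32 0
```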

No.
The documentation says:

torchvision.io.read_image(path: str, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>) → torch.Tensor

where

path (str) – path of the JPEG or PNG image

So torchvision simply reads the image and outputs the tensor directly; PIL is not involved at any point.

I don’t understand. Could you please explain in detail?