Custom transform output does not match the original

I am trying to implement the transforms below myself, but the custom outputs and the torch outputs are different. A portion of the output (some pixel values from different channels) is shown below. What am I doing wrong?

Note: I do not need tensor as output.

Custom transforms

@njit()  # numba JIT compilation; note that the function mutates image in place
def normalize_cv2(image, mean, std):
    for d in range(3):
        # scale the channel to [0, 1], then normalize with the per-channel mean/std
        image[d, :, :] = np.divide(image[d, :, :], 255)
        image[d, :, :] = np.divide(np.subtract(image[d, :, :], mean[d]), std[d])
    return image
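As an aside, the per-channel loop can also be written with numpy broadcasting, no numba needed (a sketch assuming, as above, that `mean` and `std` are length-3 sequences and the image is CHW float32; `normalize_vectorized` is a hypothetical name):

```python
import numpy as np

def normalize_vectorized(image, mean, std):
    # Reshape mean/std to (3, 1, 1) so they broadcast over the
    # height and width axes of the CHW image.
    mean = np.asarray(mean, dtype=np.float32).reshape(3, 1, 1)
    std = np.asarray(std, dtype=np.float32).reshape(3, 1, 1)
    return (image / 255.0 - mean) / std

img = np.full((3, 2, 2), 127.5, dtype=np.float32)
out = normalize_vectorized(img, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
print(out)  # every element is (0.5 - 0.5) / 0.5 = 0.0
```

Unlike the in-place loop, this returns a new array and leaves the input untouched.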

cvresized = cv2.resize(cvimage, (250, 250))
cvresized = cv2.cvtColor(cvresized, cv2.COLOR_BGR2RGB)
cvresized = np.array(cvresized, dtype = np.float32) 
cvresized = np.transpose(cvresized, [2, 0, 1])
normalized_cvimage = normalize_cv2(cvresized, mean, std)

Torch transforms

transform = transforms.Compose([
    transforms.Resize((250, 250)),
    transforms.ToTensor(),
    transforms.Normalize(mean, std)  
])

Outputs

1 - Torch output:  tensor(-0.3176)
1 - Custom output:  -0.30196077
2 - Torch output:  tensor(-0.2627)
2 - Custom output:  -0.24705881
3 - Torch output:  tensor(-0.1922)
3 - Custom output:  -0.19215685
4 - Torch output:  tensor(-0.0980)
4 - Custom output:  -0.11372548

I don’t know how you are comparing these two outputs, but make sure to create a clone of the input numpy array, since normalize_cv2 manipulates it in place.
Also note that ToTensor() will only scale numpy arrays to the [0, 1] range if they are passed with the uint8 dtype; float32 arrays are converted without scaling.
Additionally, I’ve removed the resizing, as I don’t know what kind of interpolation is used by OpenCV by default.
Given that, your method works fine:

import copy

import numpy as np
from torchvision import transforms

def normalize_cv2(image, mean, std):
    for d in range(3):
        image[d, :, :] = np.divide(image[d, :, :], 255)
        image[d, :, :] = np.divide(np.subtract(image[d, :, :], mean[d]), std[d])
    return image


mean = np.array([0.5, 0.5, 0.5])
std = np.array([0.5, 0.5, 0.5])
# random uint8-range values stored as float32; deepcopy so the in-place
# normalization does not touch the copy fed to the torchvision pipeline
cvresized = np.random.randint(0, 256, (3, 250, 250)).astype(np.float32)
x = copy.deepcopy(cvresized)
normalized_cvimage = normalize_cv2(cvresized, mean, std)

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean, std)  
])


out = transform(x.transpose(1, 2, 0).astype(np.uint8))
print(np.max(np.abs((out.numpy() - normalized_cvimage))))
> 0.0
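To make the uint8-vs-float32 point concrete, here is a numpy-only sketch of ToTensor()'s scaling rule (`to_tensor_like` is a hypothetical helper mimicking the documented behavior, not actual torchvision code):

```python
import numpy as np

def to_tensor_like(arr):
    # ToTensor() converts HWC to CHW; uint8 input is additionally
    # divided by 255, while float input is passed through unscaled.
    chw = arr.transpose(2, 0, 1)
    if arr.dtype == np.uint8:
        return chw.astype(np.float32) / 255.0
    return chw.astype(np.float32)

img_u8 = np.full((2, 2, 3), 255, dtype=np.uint8)
img_f32 = img_u8.astype(np.float32)
print(to_tensor_like(img_u8).max())   # 1.0
print(to_tensor_like(img_f32).max())  # 255.0
```

This is why the snippet above casts `x` back to uint8 before calling the transform.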

I guess the differences occur due to the different resizing operations of PIL and OpenCV. Even if I use BILINEAR in PIL and INTER_LINEAR in OpenCV, there is a difference in the outputs.