Is it possible to get per channel mean and variance for images in pytorch?

Calling images.mean() (or images.std()) like this reduces over the entire tensor and produces a single value, not the per-channel mean (or std) you are after.

One way to get the mean for each channel is to do the following (assuming your images tensor is a batch shaped like (N, L, W, 3), i.e. channels last):

mean_c1 = images[:, :, :, 0].mean()
mean_c2 = images[:, :, :, 1].mean()
mean_c3 = images[:, :, :, 2].mean()
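
If your PyTorch version supports reducing over several dimensions at once, the same thing can be done in a single call by passing a tuple to the dim argument. A minimal sketch (the (8, 32, 32, 3) shape below is just an illustrative placeholder):

import torch

# Hypothetical batch of 8 channel-last images, shape (N, L, W, 3)
images = torch.rand(8, 32, 32, 3)

# Reduce over the batch and both spatial dims, keeping the channel dim
mean_per_channel = images.mean(dim=(0, 1, 2))  # shape (3,)
std_per_channel = images.std(dim=(0, 1, 2))    # shape (3,)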

Then to centre the images you could do:

centred_images = images.clone()  # allocate a tensor to hold the centred result
centred_images[:, :, :, 0] = images[:, :, :, 0] - mean_c1
centred_images[:, :, :, 1] = images[:, :, :, 1] - mean_c2
centred_images[:, :, :, 2] = images[:, :, :, 2] - mean_c3
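
As a rough alternative sketch, the same centring can be written with broadcasting, so no per-channel indexing is needed:

# keepdim=True gives a (1, 1, 1, 3) mean that broadcasts against (N, L, W, 3)
centred_images = images - images.mean(dim=(0, 1, 2), keepdim=True)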

Alternatively, you can look at torchvision.transforms.Normalize in the torchvision documentation.
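
For example, a minimal sketch of Normalize (note it expects channel-first (C, H, W) tensors, and the mean/std values below are the commonly used ImageNet statistics, not values computed from your own data):

import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

chw_image = torch.rand(3, 224, 224)   # hypothetical single channel-first image
normalized_image = normalize(chw_image)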
