RGB to Greyscale mean and standard deviation

Hi everyone :slight_smile:

I am working on a project where I want to compare the performance of CNNs on RGB images and on their greyscale-converted counterparts. PyTorch provides a handy transform, torchvision.transforms.Grayscale(num_output_channels=1), to convert an RGB image into its greyscale version.
In the source code I found that they use the luma transform to do so: L = R * 299/1000 + G * 587/1000 + B * 114/1000. Essentially, this converts an RGB image into the Y plane of a YCbCr image (if I am not mistaken).
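
For illustration, here is a minimal sketch (not the actual torchvision source) that applies those weights by hand and checks the result against the transform; it assumes a torchvision version whose transforms accept tensors:

```python
import torch
from torchvision import transforms

rgb = torch.rand(3, 224, 224)  # dummy RGB image in [0, 1]

# Luma weights: L = R * 299/1000 + G * 587/1000 + B * 114/1000
weights = torch.tensor([0.299, 0.587, 0.114]).view(3, 1, 1)
grey_manual = (rgb * weights).sum(dim=0, keepdim=True)

grey_tv = transforms.Grayscale(num_output_channels=1)(rgb)

# The two should agree up to tiny rounding differences in the weights
print(torch.allclose(grey_manual, grey_tv, atol=1e-3))
```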
My question: do the mean and the standard deviation of a dataset of converted greyscale images differ from those of the same dataset in RGB? The mean should be the same, right, since grey just means that R=G=B? What about the standard deviation?
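
To spell out the reasoning behind my question in formulas (just restating the luma transform in terms of means and variances, not an answer):

```latex
% By linearity of expectation, the grey mean is a weighted average of the channel means
\mathbb{E}[L] = 0.299\,\mathbb{E}[R] + 0.587\,\mathbb{E}[G] + 0.114\,\mathbb{E}[B]

% For the variance, the covariances between channels enter as well
\mathrm{Var}(L) = \sum_{i,j \in \{R,G,B\}} w_i\, w_j\, \mathrm{Cov}(C_i, C_j)
```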

Any help is very much appreciated!

All the best
snowe

You might want to check out this or this post.


Thank you @RaLo4! :slight_smile:

I just tried it out, and the mean and the std of the RGB and the greyscale dataset come out nearly the same, with a difference of maybe 0.01 - 0.02.
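
Roughly, the comparison can be done along these lines (CIFAR-10 is just a placeholder dataset here, not necessarily the one I used):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_rgb = transforms.ToTensor()
to_grey = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
])

def mean_std(dataset):
    # Accumulate sums over all pixels to get the global mean and std.
    loader = DataLoader(dataset, batch_size=512)
    total, total_sq, n = 0.0, 0.0, 0
    for images, _ in loader:
        total += images.sum().item()
        total_sq += (images ** 2).sum().item()
        n += images.numel()
    mean = total / n
    std = (total_sq / n - mean ** 2) ** 0.5
    return mean, std

# CIFAR-10 is only a stand-in; any RGB image dataset works the same way.
rgb_set = datasets.CIFAR10("data", train=True, download=True, transform=to_rgb)
grey_set = datasets.CIFAR10("data", train=True, download=True, transform=to_grey)

print("RGB  mean/std:", mean_std(rgb_set))
print("Grey mean/std:", mean_std(grey_set))
```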

I am just curious… from a theoretical perspective, does that make sense?

All the best
snowe