Grayscale layer for an image

Hi, I want to write a grayscale layer for my image that converts it from RGB to grayscale. However, I have already processed the image, so I have to operate on the tensor directly. My first implementation is the following (x is the image tensor):
# weighted sum of the R, G, and B channels (ITU-R BT.601 luma coefficients)
result = 0.299 * x[:, 0] + 0.587 * x[:, 1] + 0.114 * x[:, 2]
# restore the channel dimension: (N, H, W) -> (N, 1, H, W)
return result.unsqueeze(1)

Is there a simpler way or is this correct?

You can use the torchvision Grayscale transform (torchvision.transforms.Grayscale).
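
For example, a minimal sketch assuming the input is still a PIL.Image ('photo.jpg' is just a placeholder path, not from the original post):

from PIL import Image
from torchvision import transforms

img = Image.open('photo.jpg')                       # placeholder path
to_gray = transforms.Grayscale(num_output_channels=1)
gray_img = to_gray(img)                             # single-channel PIL.Image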

If you don’t want to convert your image tensor to a PIL.Image first, you could also use this code:

import torch

f = torch.tensor([0.299, 0.587, 0.114])  # luma weights for R, G, B
x = torch.randn(64, 3, 224, 224)

# explicit per-channel weighted sum
result1 = 0.299 * x[:, 0] + 0.587 * x[:, 1] + 0.114 * x[:, 2]
# broadcasted multiply with the weight vector, then sum over the channel dim
result2 = (x * f[None, :, None, None]).sum(1)
print((result1 == result2).all())  # use torch.allclose to tolerate tiny float differences

However, I don’t think you’ll notice a huge difference in performance between the two:

%timeit 0.299 * x[:, 0] + 0.587 * x[:, 1] + 0.114 * x[:, 2]
7.66 ms ± 100 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit (x * f[None, :, None, None]).sum(1)
7.3 ms ± 154 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
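
If you want to package this as a layer, here is a minimal sketch of an nn.Module wrapper (the class name GrayscaleLayer is just an illustration, not something provided by torchvision):

import torch
import torch.nn as nn

class GrayscaleLayer(nn.Module):
    """Converts an (N, 3, H, W) RGB batch to an (N, 1, H, W) grayscale batch."""
    def __init__(self):
        super().__init__()
        # register the luma weights as a buffer so they move with .to(device)
        self.register_buffer('weights', torch.tensor([0.299, 0.587, 0.114]))

    def forward(self, x):
        # broadcasted weighted sum over the channel dimension
        result = (x * self.weights[None, :, None, None]).sum(1)
        return result.unsqueeze(1)

x = torch.randn(64, 3, 224, 224)
print(GrayscaleLayer()(x).shape)  # torch.Size([64, 1, 224, 224])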