RGB to grayscale shape mismatch

I am augmenting my images and part of this process involves using an rgb2gray filter as below:

import numpy as np

def rgb2gray(rgb):
    # ITU-R 601-2 luma weights
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

The problem is that the original image has shape (64, 64, 3), while the new image has shape (64, 64). This is problematic because my neural net expects input of size 64×64×3 and I cannot change it. Is there any way to work around this problem?

You can duplicate the gray image on each channel.
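A minimal NumPy sketch of this idea, using the same luma weights as the question (the `np.repeat` call copies the single gray channel into three identical channels so the result fits a 64×64×3 input):

```python
import numpy as np

def rgb2gray(rgb):
    # ITU-R 601-2 luma weights, as in the question
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

img = np.random.rand(64, 64, 3)          # stand-in for one augmented image
gray = rgb2gray(img)                     # shape (64, 64)

# Add a trailing channel axis, then repeat it 3 times -> shape (64, 64, 3)
gray3 = np.repeat(gray[..., np.newaxis], 3, axis=-1)
```

`np.stack([gray] * 3, axis=-1)` or broadcasting with `np.broadcast_to` would work just as well; the key point is that all three output channels hold the same gray values.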

I think you could also use transforms.Grayscale(num_output_channels=3); it does what @ebarsoum said and duplicates the gray values on each channel, so r == g == b.