Converting RGB into Grayscale

Hello. I have three folders of images, with subfolders for train, test, and validate. I used T.Grayscale(1) to convert them to grayscale, but when I show an image it looks like this. I don't understand why. Here is the image I'm getting:

My code is here:

import matplotlib.pyplot as plt
import torch.nn.functional as F
import torch
import numpy as np
import torchvision.transforms as T

def show_image(image, label):
    # Tensors come out of the transforms as [channels, height, width];
    # matplotlib expects channels-last, so permute to [height, width, channels]
    image = image.permute(1, 2, 0)
    plt.imshow(image)
    plt.title(label)

def show_grid(image, title=None):
    image = image.permute(1, 2, 0)

    plt.figure(figsize=[15, 15])
    plt.imshow(image)
    if title is not None:
        plt.title(title)

def accuracy(y_pred, y_true):
    # Turn logits into probabilities, take the top class per sample,
    # and compare against the ground-truth labels
    y_pred = F.softmax(y_pred, dim=1)
    top_p, top_class = y_pred.topk(1, dim=1)
    equals = top_class == y_true.view(*top_class.shape)
    return torch.mean(equals.type(torch.FloatTensor))

def view_classify(image, ps, label):
    class_name = ['F', 'M', 'N', 'S', 'Q', 'V']
    classes = np.array(class_name)

    ps = ps.cpu().data.numpy().squeeze()

    image = image.permute(1, 2, 0)

    fig, (ax1, ax2) = plt.subplots(figsize=(8, 12), ncols=2)
    ax1.imshow(image)
    ax1.set_title('Ground Truth : {}'.format(class_name[label]))
    ax1.axis('off')
    ax2.barh(classes, ps)
    ax2.set_aspect(0.1)
    ax2.set_yticks(classes)
    ax2.set_yticklabels(classes)
    ax2.set_title('Predicted Class')
    ax2.set_xlim(0, 1.1)

    plt.tight_layout()

    return None

train_transform = T.Compose([
    T.Resize(size=(CFG.img_size, CFG.img_size)),
    T.ToTensor(),
    T.Grayscale(num_output_channels=1)
])

validate_transform = T.Compose([
    T.Resize(size=(CFG.img_size, CFG.img_size)),
    T.ToTensor(),
    T.Grayscale(num_output_channels=1)
])

test_transform = T.Compose([
    T.Resize(size=(CFG.img_size, CFG.img_size)),  # resizing to CFG.img_size x CFG.img_size (32x32 here)
    T.ToTensor(),
    T.Grayscale(num_output_channels=1)
])


The original images look like this. But the output image above is blurry as well. I understand I changed the size to 32x32 from the original 438x288, so maybe that's why it is blurry.

matplotlib.pyplot.imshow uses the viridis colormap by default, so you need to pass cmap="gray" to show the image in grayscale:

import numpy as np
import matplotlib.pyplot as plt

img = np.random.randint(0, 256, (32, 32)).astype(np.uint8)

plt.imshow(img)               # rendered with the default viridis colormap
plt.imshow(img, cmap="gray")  # rendered in actual grayscale

Yes, resizing the image to 32x32 pixels will create these artifacts.
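If you want to see how much detail the 32x32 downscale throws away, you can put the original and the resized image side by side. This is a minimal sketch; "your_ecg_image.png" is a placeholder for one of your own 438x288 files:

import matplotlib.pyplot as plt
from PIL import Image
import torchvision.transforms as T

# Placeholder path - substitute one of your 438x288 ECG images
img = Image.open("your_ecg_image.png").convert("L")  # single channel, so cmap="gray" applies

small = T.Resize(size=(32, 32))(img)  # the same resize your pipeline uses

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4))
ax1.imshow(img, cmap="gray")
ax1.set_title("Original (438x288)")
ax2.imshow(small, cmap="gray")
ax2.set_title("Resized (32x32)")
plt.show()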

Thank you so much, this worked for me. I have trained my model, saved the weights, and tested it. Now I'm trying to use view_classify from a tutorial, but it gives me errors like "invalid shape for image data". What can I do? I searched here and people said to copy helper.py, but I don't know where helper.py is, as I just followed the tutorial and nothing is mentioned there. Someone on the forum pasted this code:
import matplotlib.pyplot as plt
import numpy as np

def view_classify(img, ps, version="MNIST"):
    ''' Function for viewing an image and its predicted classes.
    '''
    ps = ps.data.numpy().squeeze()

    fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2)
    ax1.imshow(img.resize_(1, 28, 28).numpy().squeeze())  # hard-coded for 28x28 MNIST-style images
    ax1.axis('off')
    ax2.barh(np.arange(10), ps)
    ax2.set_aspect(0.1)
    ax2.set_yticks(np.arange(10))
    if version == "MNIST":
        ax2.set_yticklabels(np.arange(10))
    elif version == "Fashion":
        ax2.set_yticklabels(['T-shirt/top',
                             'Trouser',
                             'Pullover',
                             'Dress',
                             'Coat',
                             'Sandal',
                             'Shirt',
                             'Sneaker',
                             'Bag',
                             'Ankle Boot'], size='small')
    ax2.set_title('Class Probability')
    ax2.set_xlim(0, 1.1)

    plt.tight_layout()

I saved it as view_classify.ipynb and uploaded it to Colab, like this:

And I got the error in the image above. Please tell me how to fix it. My dataset is the ECG image data from Kaggle, 32x32 with 1 channel.

This is the tutorial I'm following: Train an EfficientNet Model in PyTorch for Medical Diagnosis | by Yaokun Lin @ MachineLearningQuickNotes | Geek Culture | Medium

I would guess a channels-last layout is expected, so permute your tensor into [height, width, channels]. Since your images are single-channel, also squeeze out the trailing channel dimension: matplotlib's imshow accepts [H, W], [H, W, 3], or [H, W, 4] arrays, but not [H, W, 1], which is what raises the "invalid shape for image data" error.
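For example, a minimal sketch (the random tensor stands in for one of your [1, 32, 32] grayscale tensors):

import torch
import matplotlib.pyplot as plt

image = torch.rand(1, 32, 32)  # stand-in for a [channels, height, width] grayscale tensor

img = image.permute(1, 2, 0)   # [1, 32, 32] -> [32, 32, 1], channels-last
img = img.squeeze()            # -> [32, 32]: drop the trailing singleton channel imshow rejects
plt.imshow(img, cmap="gray")
plt.show()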