Given groups=1, weight[16, 1, 5, 5], so expected input[100, 3, 64, 64] to have 1 channels, but got 3 channels instead

Thanks for your patience. I am using PyTorch 0.4. Can you provide me the code to unsqueeze the tensor twice?
I have tried it like this,

iter=0
for epoch in range(num_epochs):
    for img, labels in train_loader:
        img=img.unsqueeze(0)
        img=img.unsqueeze(0)
        print(img.size())
        img=Variable(img)
        labels=Variable(labels)

It again shows the same error.
This is the size of the tensor:
torch.Size([1, 1, 100, 3, 64, 64])

Probably I was wrong.
The shape before unsqueeze looks like you are using a batch size of 100 and 3 channels for each image.
Are you loading the images using PIL?
What is the shape of gray if you use my short code snippet?

I am using a custom dataset, so I am using a DataLoader to iterate over the images. The shape of the gray image is still (64, 64, 3) when I read it using cv2.imread(path). When I load an image using PIL, how should I check the shape?

You could use torchvision.transforms.functional.to_tensor(pil_image), which will return a normalized tensor of shape [1, h, w].
If you don't want it to be normalized, you could use torch.from_numpy(np.array(pil_image)).
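
For example, assuming the same image path you used and forcing a single channel via convert('L') (that call is just for illustration):

import numpy as np
import torch
import torchvision.transforms.functional as TF
from PIL import Image

gray_image = Image.open('train_set/1/2.png').convert('L')   # convert('L') forces a single gray channel

t1 = TF.to_tensor(gray_image)                 # float tensor scaled to [0, 1], shape [1, 64, 64] for a 64x64 image
t2 = torch.from_numpy(np.array(gray_image))   # uint8 tensor with the raw pixel values, shape [64, 64]
print(t1.shape, t1.dtype)
print(t2.shape, t2.dtype)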

Thanks. I used torchvision.transforms.functional.to_tensor(gray_image).size()

Output:
torch.Size([1, 64, 64])

Perfect! Now you would have to add the batch dimension before passing it to the model using tensor = tensor.unsqueeze(0).
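
For a single image that would look like this (a sketch; model here just stands for your network, which expects 1 input channel):

tensor = torchvision.transforms.functional.to_tensor(gray_image)   # [1, 64, 64] for a 64x64 grayscale PIL image
tensor = tensor.unsqueeze(0)                                        # [1, 1, 64, 64]: a batch of one image
output = model(tensor)                                              # `model` is your network (placeholder name)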

I tried this,

iter=0
for epoch in range(num_epochs):
    for img, labels in train_loader:
        img=torchvision.transforms.functional.to_tensor(img)
        print(img.size())
        img=Variable(img)
        labels=Variable(labels)

It shows:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-258-5fa2f20aa942> in <module>
      2 for epoch in range(num_epochs):
      3     for img, labels in train_loader:
----> 4         img=torchvision.transforms.functional.to_tensor(img)
      5         print(img.size())
      6         img=Variable(img)

~\Anaconda3\lib\site-packages\torchvision\transforms\functional.py in to_tensor(pic)
     42     """
     43     if not(_is_pil_image(pic) or _is_numpy_image(pic)):
---> 44         raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
     45 
     46     if isinstance(pic, np.ndarray):

TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>

This is the only transformation I made:

transform=transforms.Compose([ToTensor()])
train_data = ImageFolder(root=Root,transform=transform)

I tried your code on a single image and it worked:

I=Image.open('train_set/1/2.png')
torchvision.transforms.functional.to_tensor(I).unsqueeze(0).size()

How should I use it in the training code section?
I also tried the ToPILImage() transformation separately, but it also shows an error.
Can you please give a snippet of the code?

torch.from_numpy(np.asarray(yourimage))
would do the job.

It works on a single image. I opened a single image using PIL and used both torch.from_numpy(np.asarray(yourimage)) and torchvision.transforms.functional.to_tensor(I), and both give the same shape (1, 64, 64) when I unsqueeze it. But when I use your code in the training part like this,

iter=0
for epoch in range(num_epochs):
    for img, labels in train_loader:
        img=torchvision.transforms.functional.to_tensor(I).unsqueeze(0)
        print(img.size())
        img=Variable(img)

It shows:
TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>

You already have the image given by the train loader, just use it. Do

image.shape

to see its shape. It should already be a tensor; unsqueeze its dimension 0 to get [1, 1, h, w] and you are good to pass it to your network.

iter=0
for epoch in range(num_epochs):
    for img, labels in train_loader:
        print(img.shape)
        img=img.unsqueeze(0)
        print(img.shape)
        img=Variable(img)
        labels=Variable(labels)

Output:

torch.Size([100, 3, 64, 64])
torch.Size([1, 100, 3, 64, 64])

You still don't open your images correctly in your train loader. 100 is your batch size; if you already have a batch, you need a shape of [100, 1, h, w], because your gray image should come out as [h, w].

Please give me a code snippet of how to do it.

https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
Use the tutorial and adapt it to open the images in grayscale and reshape them correctly in __getitem__.

A grayscale image opened by PIL has shape [h, w]; if you unsqueeze it to [1, h, w] and let your dataloader create batches, you will get [B, 1, h, w] with B the batch size, h your height and w your width.
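
A minimal sketch of such a Dataset (the class name, path list, and label handling are just placeholders, adapt them to your folder structure):

from torch.utils.data import Dataset, DataLoader
from PIL import Image
import torchvision.transforms.functional as TF

class GrayDataset(Dataset):                  # placeholder name
    def __init__(self, image_paths, labels):
        self.image_paths = image_paths       # list of file paths
        self.labels = labels                 # list of integer labels

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert('L')   # single gray channel
        img = TF.to_tensor(img)                                 # [1, h, w]
        return img, self.labels[idx]

# the DataLoader then stacks the samples into [B, 1, h, w]:
# train_loader = DataLoader(GrayDataset(paths, labels), batch_size=100, shuffle=True)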

Thanks. I tried this,

iter=0
for epoch in range(num_epochs):
    for labels, img in enumerate(train_loader):
        img=np.asarray(img[0])
        print(img.shape)
        print(type(img))
        img=torch.from_numpy(np.asarray(img))
        labels=torch.from_numpy(np.asarray(labels))
        print(img.shape)
        img=Variable(img)
        labels=Variable(labels)

The shape is still (64, 64, 3)… I don't know what to do. I open an image and try your code and then the shape is (1, 64, 64).

I told you, your train loader is opening your images using 3 channels. You have to change that… You need to understand how tensors work and how the train loader is loading your data.

Thanks for your patience. I solved the problem by

iter=0
for epoch in range(num_epochs):
    for i,(img, labels) in enumerate(train_loader):
        img,labels=(img, labels)
        img=img.narrow(1,0,1)
        img=Variable(img)

Now the shape became (100, 1, 64, 64)… Training is going on now. @ptrblck thanks for your patience


You could just do:

for img, labels in train_loader:
    img = img.narrow(1, 0, 1)
    img = Variable(img)
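
narrow(1, 0, 1) keeps one channel, starting at index 0, along dimension 1, so [100, 3, 64, 64] becomes [100, 1, 64, 64]. Plain slicing would do the same:

# img has shape [100, 3, 64, 64] coming out of the loader
img = img.narrow(1, 0, 1)   # -> [100, 1, 64, 64]
# equivalent:
# img = img[:, 0:1]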

Thanks. Will use it next time. Actually all the 3 channels were grayscale and I wanted only one channel for the network's input.

You could have done something simpler using a Dataset class and dataloader to get a 1-channel input directly, but anyway, if it works for you, no need to complicate things.
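
For example, if your torchvision version has transforms.Grayscale, the ImageFolder setup from before could stay almost the same (a sketch, not tested on 0.4):

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # single channel before ToTensor
    transforms.ToTensor(),                         # -> [1, 64, 64] per image, so [100, 1, 64, 64] per batch
])
train_data = ImageFolder(root=Root, transform=transform)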