RuntimeError: expected input to have 3 channels, but got 4 channels instead

My medical PNG test images have 3 channels, as shown below:

import cv2
from google.colab.patches import cv2_imshow

# Print the (height, width, channels) shape that cv2 reports for two test images.
img = cv2.imread("a.png")
print('Image Dimensions :', img.shape)
img = cv2.imread("ax2.png")
print('Image Dimensions :', img.shape)

Results:

Image Dimensions : (625, 698, 3)
Image Dimensions : (426, 535, 3)

As shown above, my test images have 3 channels, but I get the following error, which says the images have 4 channels:

RuntimeError: Given groups=1, weight of size [3, 3, 1, 1], expected input[1, 4, 268, 300] to have 3 channels, but got 4 channels instead

What is the problem and how can I fix it?

thanks!

Could you give more context? Where did the issue occur? Do you apply any transformations? Any other relevant information would help in solving this problem.

First I ran a super-resolution algorithm with its own dataset, and that worked fine. The code is here: github.com/sanghyun-son/EDSR-PyTorch. (What this code does is take a low-resolution and a high-resolution version of the same image, enhance the low-resolution one, and finally compare the enhanced result with the high-resolution image to check the quality of the improvement. So the inputs are a high-resolution and a low-resolution image of the same photo.) After that, I tried to test with my own medical PNG dataset and got this error.
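(To make the comparison step concrete: the quality check boils down to something like the PSNR sketch below; the file names are placeholders, not files from the repo.)

import cv2
import numpy as np

# Placeholder file names: the super-resolved output and the high-resolution
# reference of the same scene, assumed to have the same shape.
sr = cv2.imread("sr_output.png").astype(np.float64)
hr = cv2.imread("hr_reference.png").astype(np.float64)

# Peak signal-to-noise ratio over 8-bit images: higher means the
# super-resolved image is closer to the high-resolution reference.
mse = np.mean((sr - hr) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print("PSNR: %.2f dB" % psnr)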

Could you please tell me which part of the code I should show you so that you can guide me better?

Traceback (most recent call last):
  File "main.py", line 33, in <module>
    main()
  File "main.py", line 26, in main
    while not t.terminate():
  File "/content/EDSR-PyTorch/src/trainer.py", line 141, in terminate
    self.test()
  File "/content/EDSR-PyTorch/src/trainer.py", line 91, in test
    sr = self.model(lr, idx_scale)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/EDSR-PyTorch/src/model/__init__.py", line 55, in forward
    return self.forward_x8(x, forward_function=forward_function)
  File "/content/EDSR-PyTorch/src/model/__init__.py", line 190, in forward_x8
    y = forward_function(*x)
  File "/content/EDSR-PyTorch/src/model/edsr.py", line 56, in forward
    x = self.sub_mean(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [3, 3, 1, 1], expected input[1, 4, 268, 300] to have 3 channels, but got 4 channels instead

What other information about my data do you need? I really need help; thank you for your help.

It seems this is a known issue. Please follow this issue: RuntimeError: Given groups=1, weight of size 3 3 1 1, expected input[1, 4, 678, 1020] to have 3 channels, but got 4 channels instead · Issue #166 · sanghyun-son/EDSR-PyTorch · GitHub
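In short, EDSR's sub_mean layer is a 1x1 convolution over 3 RGB channels (that is the weight of size [3, 3, 1, 1] in your error), while cv2.imread's default mode converts everything to 3-channel BGR and silently drops an alpha channel. So your check can report 3 channels even though the PNG itself stores 4, and the model then receives all 4. A rough sketch to verify and, if needed, strip the alpha channel (the output file name is just an example, and this assumes the alpha channel carries nothing you need):

import cv2

# IMREAD_UNCHANGED reads the PNG as stored, including any alpha channel
# that the default IMREAD_COLOR flag would silently drop.
img = cv2.imread("a.png", cv2.IMREAD_UNCHANGED)
print("Stored channel count:", img.shape)

# If the PNG turns out to be 4-channel (BGRA), strip the alpha channel and
# save a 3-channel copy for testing. "a_rgb.png" is just an example name.
if img.ndim == 3 and img.shape[2] == 4:
    cv2.imwrite("a_rgb.png", cv2.cvtColor(img, cv2.COLOR_BGRA2BGR))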

Oh, I had not seen this. Thanks a lot.
I did what was suggested there but got a new error:
RuntimeError: The size of tensor a (1070) must match the size of tensor b (698) at non-singleton dimension 3
Do you know what I should do about this one?

It seems that this issue shows a similar error: Bug in testing Set5 X3 with `args.chop=True` · Issue #223 · sanghyun-son/EDSR-PyTorch · GitHub
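For context (a guess, not necessarily the root cause in your run): errors of this form mean two tensors with different spatial sizes are being combined. In the self-ensemble path (forward_x8 in your traceback), transposed copies of a non-square image must be transposed back before they are averaged with the others, otherwise you get exactly this kind of mismatch. A minimal sketch that reproduces this class of error:

import torch

# A non-square "image" batch, (N, C, H, W) = (1, 3, 625, 698),
# the same shape as one of the test images above.
x = torch.randn(1, 3, 625, 698)

# Self-ensemble style augmentation: one copy has H and W swapped.
x_t = x.transpose(2, 3)  # shape (1, 3, 698, 625)

# Averaging without undoing the transpose raises a size-mismatch
# error at dimension 3, like the one in the message above.
try:
    y = (x + x_t) / 2
except RuntimeError as e:
    print(e)

# Transposing back first makes the shapes agree again.
y = (x + x_t.transpose(2, 3)) / 2
print(y.shape)  # torch.Size([1, 3, 625, 698])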

When you use someone else's code from GitHub, search the repo for your problem, because you are most likely not the only person who has encountered it. Good luck!

I could not solve the problem with this solution, but I am very grateful for your guidance. I will try to find a solution that works for me based on what has been suggested here. Thanks a lot!
