I got this error while running my code, can you help me fix it?
D:\Python3.8.6\lib\site-packages\torchvision\transforms\transforms.py:287: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  warnings.warn(
generator parameters: 902346
discriminator parameters: 5215425
0%| | 0/13 [00:05<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 79, in
    fake_img = netG(z)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\63288\Desktop\CF\SRGAN-master\model.py", line 91, in forward
    x = self.mct1(x)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\63288\Desktop\CF\SRGAN-master\model.py", line 17, in forward
    out1 = self.conv1(x)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[64, 3, 22, 22] to have 64 channels, but got 3 channels instead
self.conv1 expects an input with 64 channels while your activation has 3 channels.
Check where this conv layer is used and either change its in_channels or make sure the expected input is passed to it.
Your netG contains a module called self.mct1, which in turn calls self.conv1 with an input of shape [64, 3, 22, 22], while that layer expects an input with 64 channels.
Here is a small code snippet showing the error:
import torch
import torch.nn as nn

# fails
conv1 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=1)
x = torch.randn(64, 3, 22, 22)
out = conv1(x)
# RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[64, 3, 22, 22] to have 64 channels, but got 3 channels instead

# works
conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=1)
x = torch.randn(64, 3, 22, 22)
out = conv1(x)
There is still an error. I changed part of the code:
class MCT(nn.Module):
    def __init__(self, channels):
        super(MCT, self).__init__()
        # use 3 conv layers instead of a single conv layer
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=1, stride=1, padding=4)
        self.conv2 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=4)
        self.conv3 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=1, padding=4)
Another error was reported:
UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  warnings.warn(
generator parameters: 765706
discriminator parameters: 5215425
0%| | 0/13 [00:05<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 79, in
    fake_img = netG(z)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\63288\Desktop\CF\SRGAN-master\model.py", line 92, in forward
    x = self.mct1(x)
  File "D:\Python3.8.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\63288\Desktop\CF\SRGAN-master\model.py", line 31, in forward
out = torch.cat((out1, out2, out3), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 30 but got size 28 for tensor number 1 in the list.
torch.cat can only concatenate tensors that match in every dimension except the one you concatenate along (here dim=1, the channel dimension). Your three conv layers use kernel sizes 1, 3, and 5 but the same padding=4, so out1, out2, and out3 come out with different spatial sizes. Make sure all three tensors have the same shape except in dim 1.
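A minimal sketch of one way to fix this, assuming you want each branch to preserve the input's spatial size so the outputs can be concatenated: with stride=1 and an odd kernel size k, setting padding = k // 2 leaves height and width unchanged.

```python
import torch
import torch.nn as nn

# With stride=1, padding = kernel_size // 2 preserves spatial size
# for odd kernel sizes, so all three branches output 22x22 maps.
conv1 = nn.Conv2d(3, 64, kernel_size=1, stride=1, padding=0)
conv2 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
conv3 = nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2)

x = torch.randn(64, 3, 22, 22)
out1, out2, out3 = conv1(x), conv2(x), conv3(x)

# All branches now agree in every dim except dim 1, so cat succeeds.
out = torch.cat((out1, out2, out3), dim=1)
print(out.shape)  # torch.Size([64, 192, 22, 22])
```

Note that the concatenated output then has 3 * 64 = 192 channels, so whatever layer follows self.mct1 must accept 192 input channels (or you can reduce out_channels per branch so the sum stays at 64).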