TypeError: conv2d() received an invalid combination of arguments

I’m trying to replicate a simple ResNet block.

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class classifier(nn.Module):
    def __init__(self):
        super(classifier,self).__init__()
        self.res1 = self.res_block(1,8)

    def res_block(self,in_channels,out_channels):
        return  nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3),#, padding=1, stride=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3),#, padding=1, stride=1, bias=False),
            nn.BatchNorm2d(out_channels))
    def forward(self, x):
        x = self.res1(x)
        # print(x.shape)
        return x

model = classifier().to(device)
print(model)
random = torch.rand(1,1,28,28).to(device)
random = torch.tensor(random).double
model(random)

However, it fails to perform a forward pass. What did I miss? Thanks in advance.


classifier(
  (res1): Sequential(
    (0): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1))
    (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1))
    (4): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
/home/neuronics/Desktop/digit_recognizier/main.py:68: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  random = torch.tensor(random).double
Traceback (most recent call last):
  File "/home/neuronics/Desktop/digit_recognizier/main.py", line 69, in <module>
    model(random)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/neuronics/Desktop/digit_recognizier/main.py", line 61, in forward
    x = self.res1(x)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/neuronics/anaconda3/envs/pyt/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 439, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (builtin_function_or_method, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (builtin_function_or_method, Parameter, Parameter, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (builtin_function_or_method, Parameter, Parameter, tuple, tuple, tuple, int)


Process finished with exit code 1

I think there are a few issues here. You probably want random = torch.tensor(random).double() (calling the method) rather than random = torch.tensor(random).double, which assigns the bound method itself to random without calling it; that is why conv2d() complains about getting a builtin_function_or_method where it expects a Tensor. However, the model is in single precision by default, so the convolution layers expect a float tensor, and you probably want random = torch.tensor(random).float() instead. Since random is already a tensor, random.float() on its own would also work and avoids the UserWarning about torch.tensor(sourceTensor).
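For reference, a minimal sketch of how the input could be created (assuming the model stays in its default float32 precision, and reusing the names from the snippet above):

random = torch.rand(1, 1, 28, 28).to(device)  # torch.rand already returns float32
random = random.float()  # matches the model's default parameter dtype; no extra torch.tensor() wrapping needed
model(random)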

Additionally, the in_channels argument of the second conv layer is incorrect. It needs to match the number of channels produced by the first layer, so the second conv should probably be nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3).
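In case it helps, here is a rough sketch of the block with that change applied (keeping your kernel_size=3 and leaving the commented-out padding/stride/bias arguments aside):

    def res_block(self, in_channels, out_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            # the second conv consumes the out_channels feature maps produced by the first conv
            nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3),
            nn.BatchNorm2d(out_channels))

Note that without padding=1 each 3x3 conv shrinks the spatial size by 2, so if you later add the skip connection of a real ResNet block you will likely want to re-enable the padding so that x and res1(x) have matching shapes.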
