Error while running Encoder -- "TypeError: conv2d() received an invalid combination of arguments"

Hi there,
While running the Encoder code I get the error below. Could anyone help me with this? I don't understand why the tensor output of conv_layer1 is not accepted as a Tensor by the next conv layer.
(Input to Layer1: torch.Size([5, 3, 316, 46])
Output of Layer1: torch.Size([5, 64, 158, 23]))
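For context, here is a minimal standalone check of the layer-1 shapes quoted above (just a sketch; conv_layer1 and maxpool mirror the Encoder definition below):

import torch
import torch.nn as nn
import torch.nn.functional as F

# rebuild layer 1 in isolation to confirm the quoted shapes
conv_layer1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
maxpool = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))

src = torch.randn(5, 3, 316, 46)                       # [batch, Cin, W, H]
out = maxpool(F.relu(conv_layer1(src), inplace=True))  # same ops as in forward()
print(type(out))  # <class 'torch.Tensor'>
print(out.shape)  # torch.Size([5, 64, 158, 23])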

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):

    def __init__(self, input_channel, hid_dim, emb_dim, n_layers, dropout, device):
        super(Encoder, self).__init__()

        self.device = device
        self.scale = torch.sqrt(torch.FloatTensor([0.5])).to(self.device)
        self.hid_dim = hid_dim
        self.conv_layer1 = nn.Conv2d(input_channel, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        self.conv_layer2 = nn.Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        self.maxpool = nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))
        self.emb = nn.Embedding(128, emb_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        # img = [batch, Cin, W, H]
        batch = src.shape[0]
        C_in = src.shape[1]

        # src = [batch, Cin, w, h]
        # layer 1
        src = self.maxpool(F.relu(self.conv_layer1(src), inplace=True))
        # layer 2
        encoder_output = self.maxpool(F.relu(self.conv_layer2(src), inplace=True))

        return encoder_output

The error I am getting:

Traceback (most recent call last):
  File "main_img2seq.py", line 80, in <module>
    train_loss = train(model, train_iter, optimizer, criterion, CLIP, device)
  File "/Seq2Seq-docker/train.py", line 28, in train
    output = model(src, trg, True, 0.5)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Seq2Seq-docker/model/opennmt.py", line 159, in forward
    enc_output, hidden, cell = self.encoder(src)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Seq2Seq-docker/model/opennmt.py", line 52, in forward
    src = F.relu(self.batch_norm1(self.conv_layer3(src))),
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (tuple, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (!tuple!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (!tuple!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)

Your code works fine:

enc = Encoder(1, 1, 1, 1, 0.5, 'cpu')
x = torch.randn(1, 1, 24, 24)
out = enc(x)
out.shape
# > torch.Size([1, 128, 6, 6])

Thanks! I found the mistake: a stray trailing comma at the end of the conv_layer3 line shown in the traceback, which turned src into a tuple.
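For anyone who hits the same error: the trailing comma wraps the tensor in a 1-element tuple, which is exactly the "!tuple!" that conv2d() complains about in the traceback. A minimal sketch (the layer sizes here just mirror the Encoder above):

import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), padding=(1, 1))
conv2 = nn.Conv2d(64, 128, kernel_size=(3, 3), padding=(1, 1))
x = torch.randn(5, 3, 316, 46)

src_ok = conv1(x)    # a Tensor, as expected
src_bug = conv1(x),  # trailing comma -> (Tensor,), i.e. a tuple

print(type(src_ok))   # <class 'torch.Tensor'>
print(type(src_bug))  # <class 'tuple'>

conv2(src_ok)   # works
conv2(src_bug)  # TypeError: conv2d() received an invalid combination of arguments - got (tuple, ...)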