RuntimeError: Given groups=1, weight of size 256 512 1 1, expected input[8, 256, 32, 32] to have 512 channels, but got 256 channels instead

I am receiving this error while trying to run my script.
Can anyone help?

Traceback (most recent call last):
  File "train.py", line 724, in <module>
    run_training(model, trainCases, epoch, lp, max_image_shape)
  File "train.py", line 343, in run_training
    output = model(mris_batch_tensor)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/MRI to SCT/models.py", line 65, in forward
    upconcat2_feat = self.upconcat2(conv_up1_2_3_feat, conv_down5_6_7_feat)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/MRI to SCT/models.py", line 211, in forward
    x_conv = self.W_x(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 256 512 1 1, expected input[8, 256, 32, 32] to have 512 channels, but got 256 channels instead

Please let me know what your self.W_x is and the dimensions of x.
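You can check the dimensions with a temporary print at the top of forward (a debug line added for this purpose, not part of your original model):

    def forward(self, x, g):
        # temporary debug: print the incoming shapes as (N, C, H, W)
        print("x:", x.shape, "g:", g.shape)
        x_conv = self.W_x(x)  # the call that currently fails
        ...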

import torch
import torch.nn as nn


class UpConcat_ATT(nn.Module):
    def __init__(self, in_feat, out_feat):
        super(UpConcat_ATT, self).__init__()

        # 1x1 conv on the skip connection x; stride 2 halves its spatial size
        self.W_x = nn.Sequential(
            nn.Conv2d(in_feat, out_feat, kernel_size=1, stride=2, padding=0, bias=True),
            nn.BatchNorm2d(out_feat)
        )

        # 1x1 conv on the gating signal g
        self.W_g = nn.Sequential(
            nn.Conv2d(in_feat, out_feat, kernel_size=1, stride=1, padding=0, bias=True),
            nn.BatchNorm2d(out_feat)
        )

        self.relu = nn.ReLU(inplace=True)

        # transposed conv followed by bilinear upsampling (doubles H and W)
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(in_feat, out_feat, kernel_size=3, padding=1, stride=1, dilation=1, output_padding=0),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
        )

    def forward(self, x, g):
        x_conv = self.W_x(x)
        g_conv = self.W_g(g)

        # additive attention: sigmoid(relu(W_x(x) + W_g(g))) gives the gate
        summed = x_conv + g_conv
        summed_relu = self.relu(summed)
        summed_relu_sigm = torch.sigmoid(summed_relu)

        # upsample the attention map and gate the skip connection
        to_be_multiplied = self.upsample(summed_relu_sigm)
        x_gated = to_be_multiplied * x

        # upsample the gating signal and concatenate along the channel axis
        g_up = self.upsample(g)
        out = torch.cat([g_up, x_gated], 1)
        return out

It's the attention gate I am trying to implement on a U-Net model.

I don't know the dimensions of x, but what about increasing the number of channels of x?

The popular method of doing that is a 1x1 conv.

The message you got means "x should have 512 channels, but currently has 256": a Conv2d weight has shape (out_channels, in_channels, kH, kW), so "weight of size 256 512 1 1" is a 1x1 conv that expects 512 input channels and produces 256, while your x carries only 256.
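Here is a minimal sketch of that fix, assuming x really is supposed to end up with 512 channels (the layer name "expand" and the shapes come from your error message, not from your model):

import torch
import torch.nn as nn

# hypothetical 1x1 conv that widens x from 256 to 512 channels
expand = nn.Conv2d(256, 512, kernel_size=1)

x = torch.randn(8, 256, 32, 32)  # the input shape reported in the error
print(expand(x).shape)           # torch.Size([8, 512, 32, 32])

Alternatively, if x is meant to keep 256 channels, build the block so that W_x expects 256 input channels (for example, give UpConcat_ATT a separate input-channel argument for x) instead of changing x.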