I am receiving the error below while trying to run my training script. Can anyone help me figure out what is causing it?
Traceback (most recent call last):
  File "train.py", line 724, in <module>
    run_training(model, trainCases, epoch, lp, max_image_shape)
  File "train.py", line 343, in run_training
    output = model(mris_batch_tensor)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/MRI to SCT/models.py", line 65, in forward
    upconcat2_feat = self.upconcat2(conv_up1_2_3_feat, conv_down5_6_7_feat)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/MRI to SCT/models.py", line 211, in forward
    x_conv = self.W_x(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 256 512 1 1, expected input[8, 256, 32, 32] to have 512 channels, but got 256 channels instead
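For context: the weight of size 256 512 1 1 means the 1x1 convolution inside self.W_x was declared with 512 input channels, while the tensor x passed to it in forward() has only 256 channels. I don't have the exact W_x definition from models.py handy, but a minimal sketch like the following (the real layer may be defined differently, this is only assumed for illustration) raises the same kind of RuntimeError:

import torch
import torch.nn as nn

# Assumed sketch: a 1x1 conv whose weight has shape [256, 512, 1, 1],
# i.e. it expects feature maps with 512 input channels.
W_x = nn.Conv2d(in_channels=512, out_channels=256, kernel_size=1)

# The feature map actually handed to it has only 256 channels.
x = torch.randn(8, 256, 32, 32)
out = W_x(x)  # RuntimeError: expected input to have 512 channels, got 256

So the mismatch happens at the self.upconcat2(conv_up1_2_3_feat, conv_down5_6_7_feat) call; if the two feature maps are being passed in the wrong order, or the decoder's channel counts don't match the encoder's, I suspect it would surface exactly like this, but I'm not sure where to fix it.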