RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 48 but got size 47 for tensor number 1 in the list

Hi, I am training on 181 images in .png format for image registration, and received this error:

File "/project/med/Hassan_Ghavidel/transformer_target_localization/code/main_train_unsup_TransMorph.py", line 228, in <module>
train_val_test.train_val_model_unsupervised(model, train_loader, optimizer, config.loss_name, config.loss_weights,
File "/project/med/Hassan_Ghavidel/transformer_target_localization/code/auxiliary/train_val_test.py", line 61, in train_val_model_unsupervised
train_outputs, train_ddf = model(torch.cat((train_inputs, train_targets), dim=1))
File "/project/med/Hassan_Ghavidel/TransMorph_Transformer_for_Medical_Image_Registration/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/project/med/Hassan_Ghavidel/transformer_target_localization/code/models/TransMorph.py", line 838, in forward
out = self.up1(out, f2)
File "/project/med/Hassan_Ghavidel/TransMorph_Transformer_for_Medical_Image_Registration/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/project/med/Hassan_Ghavidel/transformer_target_localization/code/models/TransMorph.py", line 707, in forward
x = torch.cat([x, skip], dim=1)
File "/project/med/Hassan_Ghavidel/TransMorph_Transformer_for_Medical_Image_Registration/myenv/lib/python3.8/site-packages/monai/data/meta_tensor.py", line 282, in __torch_function__
ret = super().__torch_function__(func, types, args, kwargs)
File "/project/med/Hassan_Ghavidel/TransMorph_Transformer_for_Medical_Image_Registration/myenv/lib/python3.8/site-packages/torch/_tensor.py", line 1279, in __torch_function__
ret = func(*args, **kwargs)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 48 but got size 47 for tensor number 1 in the list.

The line the code is complaining about is

x = torch.cat([x, skip], dim=1)

in models/TransMorph.py (line 707 in the traceback).

Could you give me a suggestion on how to solve this issue?

Hi Hassan,
it seems your two tensors x and skip do not match in shape.
You can check their shapes in the debugger by hovering over the variable names. From the error message I suspect that x has a shape of [48, n, ...] and skip has a shape of [47, m, ...]. When you cat two tensors along dimension 1, all other dimensions (here specifically dimension 0) need to match in size, e.g. skip should also have shape [48, m, ...] so that the resulting tensor has shape [48, n+m, ...].
Perhaps you want to share the shapes of x and skip (and perhaps also of x before the self.up1 call) in case you still have difficulties understanding the problem.
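The shape rule can be demonstrated with NumPy's concatenate, which enforces the same constraint as torch.cat (using made-up shapes for illustration, following the [48, n, ...] hypothesis above):

```python
import numpy as np

x = np.zeros((48, 4, 8))
skip = np.zeros((48, 6, 8))

# all dimensions except the concatenation axis (1) match -> OK
out = np.concatenate([x, skip], axis=1)
print(out.shape)  # (48, 10, 8)

bad = np.zeros((47, 6, 8))  # dimension 0 off by one: 47 vs 48
try:
    np.concatenate([x, bad], axis=1)
except ValueError as err:
    print("mismatch:", err)  # same constraint torch.cat enforces
```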


Thanks OEtzi for your response.
In the main_train_unsup_TransMorph.py script I had written a class to force all input images to a single channel; this is the class:

import torch

class EnsureSingleChannel:
    """Keep only the first channel of each image so all inputs are single-channel."""
    def __init__(self, keys):
        self.keys = keys
    def __call__(self, data):
        for key in self.keys:
            img = data[key]
            if not isinstance(img, torch.Tensor):
                img = torch.tensor(img)
            if img.shape[0] > 1:
                img = img[0:1]  # keep the first channel only
            elif img.shape[0] < 1:
                raise ValueError("Image has less than 1 channel, cannot be processed")
            data[key] = img
        return data

Then I used it in my transform pipeline:

train_transforms = Compose(
    [
        LoadImaged(keys=["fixed_image", "moving_image"]),
        EnsureChannelFirstd(keys=("fixed_image", "moving_image")),
        EnsureSingleChannel(keys=["fixed_image", "moving_image"]),
        ScaleIntensityRanged(
            keys=["fixed_image", "moving_image"],
            a_min=0,
            a_max=1000,
            b_min=0.0,
            b_max=1.0,
            clip=False,
        ),
    ]
)

If I don't use it, I receive images with different channel counts:

RuntimeError: Given groups=1, weight of size [48, 2, 3, 3], expected input[2, 4, 185, 185] to have 2 channels, but got 4 channels instead

This EnsureSingleChannel class worked for another set of input images (some of which were grayscale), but for the new set of images I receive:

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 48 but got size 47 for tensor number 1 in the list.
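One hypothesis worth checking: the error message in the conv error above shows a spatial size of 185, which is odd. Assuming the encoder halves the spatial size at each level with ceiling rounding (as Swin-style patch merging effectively does after padding) and the decoder doubles it on the way up, an input of 185 produces exactly a 48-vs-47 mismatch at the skip connection. A quick sketch of the arithmetic:

```python
import math

def encoder_sizes(s, levels=4):
    """Spatial size at each encoder level, assuming each level
    halves with ceiling rounding (odd sizes get padded up by 1)."""
    sizes = [s]
    for _ in range(levels):
        s = math.ceil(s / 2)
        sizes.append(s)
    return sizes

sizes = encoder_sizes(185)   # [185, 93, 47, 24, 12]
upsampled = sizes[3] * 2     # decoder doubles 24 -> 48
skip = sizes[2]              # encoder skip is 47 -> torch.cat fails: 48 vs 47
print(upsampled, skip)

# a size divisible by 2**levels round-trips cleanly:
sizes_ok = encoder_sizes(192)  # [192, 96, 48, 24, 12]
assert sizes_ok[3] * 2 == sizes_ok[2]
```

If this is indeed the cause, padding the inputs so their spatial size is divisible by the downsampling factor (e.g. with MONAI's DivisiblePadd transform) should avoid the mismatch, though I haven't verified this against the exact TransMorph configuration.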

I am thinking maybe I should define a different transform for these new images…
Or do you think I should try to manually make the x and skip tensors the same size?
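As a workaround, the smaller of the two tensors could be padded (or the larger cropped) before the cat, although padding the input images to a compatible size is usually the cleaner fix. A minimal sketch of the idea, written with NumPy arrays for illustration (the same indexing and padding work on torch tensors, e.g. via torch.nn.functional.pad); the helper name match_spatial and the shapes are made up:

```python
import numpy as np

def match_spatial(a, ref):
    """Pad or crop array `a` (layout N, C, H, W) so its spatial
    dims equal those of `ref`. Zero-pads at the end when `a` is
    smaller, crops from the end when it is larger."""
    out = a
    for axis in range(2, ref.ndim):
        diff = ref.shape[axis] - out.shape[axis]
        if diff > 0:                       # a is smaller: zero-pad
            pad = [(0, 0)] * out.ndim
            pad[axis] = (0, diff)
            out = np.pad(out, pad)
        elif diff < 0:                     # a is larger: crop
            out = np.take(out, range(ref.shape[axis]), axis=axis)
    return out

x = np.zeros((2, 96, 48, 48))     # decoder output after upsampling
skip = np.zeros((2, 48, 47, 47))  # encoder skip, one pixel smaller
skip = match_spatial(skip, x)
merged = np.concatenate([x, skip], axis=1)
print(merged.shape)  # (2, 144, 48, 48)
```

Note that zero-padding a skip connection slightly shifts the feature alignment at the border, which is why padding the network input up front is generally preferred.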

Double post from here with follow up.