Using a loss of a different shape than the model output for backpropagation

Hello everyone,

I’m building a CNN model in PyTorch for Image Super Resolution. After applying the model to the low-resolution image, I want to validate only a small part of the output against a high-resolution image, which means I have to post-process the model output (shape 100x100) down to an array of shape 50.
So basically here are my steps:

mse_loss = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=params["learning_rate"], weight_decay=params["weight_decay"])
pred = model(input)  # pred.shape: (1, 1, 100, 100)
pred_array = CalculatePredArray(pred)  # pred_array.shape: (1, 1, 50); CalculatePredArray is my own function
loss = mse_loss(pred_array, target_array)  # target_array also has shape (1, 1, 50)

Error message: RuntimeError: The size of tensor a (100) must match the size of tensor b (50) at non-singleton dimension 3

My model only consists of nn.Conv2d and nn.ReLU layers, so it is independent of the input and output shape. I’m also not using any flattening or similar.

It should be possible to train the model with a loss computed on a different shape than the output, right? The shape shouldn’t matter, since the model only consists of convolution kernels. But I don’t know how to write the corresponding code and couldn’t find anything online that solved my problem.
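(The shape-independence claim above can be checked with a quick sketch; the layer sizes here are arbitrary placeholders, not my actual model:)

```python
import torch
import torch.nn as nn

# A purely convolutional layer has no fixed input size:
# the same weights apply to any spatial resolution.
conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1)

out_large = conv(torch.randn(1, 1, 100, 100))
out_small = conv(torch.randn(1, 1, 64, 64))

print(out_large.shape)  # torch.Size([1, 1, 100, 100])
print(out_small.shape)  # torch.Size([1, 1, 64, 64])
```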

Does anyone know how to implement that?

Thank you very much in advance!

Best regards,

Your code should work as seen here:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(3, 1, 3, 1, 1)
input = torch.randn(1, 3, 100, 100)

pred = model(input)
pred_array = pred[:, :, :50, 0]
print(pred_array.shape)  # torch.Size([1, 1, 50])
loss = F.mse_loss(pred_array, torch.rand_like(pred_array))

and the error message indicates that pred_array does not have the posted shape, but instead still has a size of 100 in dimension 3.
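For reference, a minimal end-to-end sketch (with a toy fully convolutional model and a hypothetical slicing step standing in for CalculatePredArray) showing that gradients flow back through the shape-changing post-processing:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Toy fully convolutional model; layer sizes are placeholders.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, 1, 1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, 1, 1),
)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

lr_image = torch.randn(1, 1, 100, 100)   # low-resolution input
target_array = torch.randn(1, 1, 50)     # target of shape (1, 1, 50)

pred = model(lr_image)                   # (1, 1, 100, 100)
pred_array = pred[:, :, :50, 0]          # stand-in for CalculatePredArray -> (1, 1, 50)

loss = F.mse_loss(pred_array, target_array)  # shapes now match
optimizer.zero_grad()
loss.backward()                          # gradients flow through the slicing step
optimizer.step()

# Every convolution weight received a gradient, even though the loss
# only covered part of the output.
assert all(p.grad is not None for p in model.parameters())
```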


It’s working! Thank you A LOT :)
My mistake was to use nn.MSELoss instead of F.mse_loss. That was very helpful!