Hello everyone,
I’m building a CNN model in PyTorch for image super-resolution. After applying the model to the low-resolution image, I want to validate only a small part of the output against the high-resolution image, meaning I have to post-process the model output (shape 100x100) down to a 1D array (shape 50).
So basically here are my steps:
mse_loss = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=params["learning_rate"], weight_decay=params["weight_decay"])
pred = model(input) #pred.shape: (1,1,100,100)
pred_array = CalculatePredArray(pred) #pred_array.shape: (1,1,50), CalculatePredArray is my own function
loss = mse_loss(pred_array, target_array) #target_array also has shape (1,1,50)
loss.backward()
optimizer.step()
Error message: RuntimeError: The size of tensor a (100) must match the size of tensor b (50) at non-singleton dimension 3
My model consists only of nn.Conv2d and nn.ReLU layers, so it is independent of any particular input or output shape. I’m also not using any flattening layers or similar.
It should be possible to train the model on a loss computed over a shape different from the full model output, right? The shape shouldn’t matter, since the model consists only of kernels. But I don’t know how to write the corresponding code and couldn’t find anything online that solved my problem.
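For context, here is a minimal sketch of what I expect to work. The `calculate_pred_array` below is just a hypothetical stand-in for my own post-processing (it crops a 50-pixel slice from the output); since autograd tracks slicing, the loss computed on the slice should still backpropagate into the full model:

```python
import torch
import torch.nn as nn
import torch.optim as optim

def calculate_pred_array(pred):
    # pred: (1, 1, 100, 100) -> take the first 50 pixels of row 50.
    # Hypothetical stand-in for my real post-processing function.
    return pred[:, :, 50, :50]  # shape (1, 1, 50)

# Small fully-convolutional model, shape-independent like mine
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
mse_loss = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

input = torch.randn(1, 1, 100, 100)
target_array = torch.randn(1, 1, 50)

optimizer.zero_grad()
pred = model(input)                        # (1, 1, 100, 100)
pred_array = calculate_pred_array(pred)    # (1, 1, 50)
loss = mse_loss(pred_array, target_array)  # both (1, 1, 50)
loss.backward()
optimizer.step()
```

In this sketch the loss is computed only on the (1,1,50) slice, yet gradients still flow back through the slicing into all convolution weights.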
Does anyone know how to implement that?
Thank you very much in advance!
Best regards,
Johannes