Hi, I’m building a CNN model with MobileNet’s layers as the encoder. The decoder is intended to output depth data from an image, but at a smaller size, let’s say w, h = 30, 30. However, my target depth maps are much bigger (640 x 480), so when comparing the two in the loss function I’m not sure how to handle this size mismatch.
One approach could be resizing the output (from 30 x 30 to 640 x 480) and comparing the two at full resolution… but won’t that impact the error/accuracy?
Another approach could be resizing the target down to the smaller output size (640 x 480 to 30 x 30), but I’m not sure about that either.
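For reference, here is a minimal sketch of the two options I mean, assuming PyTorch (the shapes and the L1 loss are just placeholders for illustration):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch: 8 predicted depth maps (30 x 30)
# and 8 full-resolution targets (480 x 640), channel-first.
pred = torch.rand(8, 1, 30, 30)
target = torch.rand(8, 1, 480, 640)

# Option 1: upsample the prediction to the target resolution.
pred_up = F.interpolate(pred, size=(480, 640),
                        mode="bilinear", align_corners=False)
loss_up = F.l1_loss(pred_up, target)

# Option 2: downsample the target to the prediction resolution.
target_down = F.interpolate(target, size=(30, 30),
                            mode="bilinear", align_corners=False)
loss_down = F.l1_loss(pred, target_down)
```
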
What is the correct approach to computing the loss here, and which one (target or output) should be resized in this case?