The shape mismatch is raised in nn.NLLLoss, so check its docs and make sure the model output and the target have the expected shapes.
I would guess you might need to squeeze dim1 in the target, but since you didn't explain the use case it's just my guess.
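A minimal sketch of what I mean (the shapes here are made up for illustration): nn.NLLLoss expects the output as [batch_size, num_classes] log-probabilities and the target as [batch_size] class indices, so a target with a trailing dim of 1 raises the mismatch until you squeeze it:

```python
import torch
import torch.nn.functional as F

batch_size, num_classes = 4, 10

# model output: [batch_size, num_classes] log-probabilities
output = F.log_softmax(torch.randn(batch_size, num_classes), dim=1)

# target with an extra dim1, e.g. [batch_size, 1], triggers the shape mismatch
target = torch.randint(0, num_classes, (batch_size, 1))

# squeezing dim1 gives the expected [batch_size] target
loss = F.nll_loss(output, target.squeeze(1))
```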
For my use case, I am trying to use UNet for a depth estimation task. Now I understand: my label is a 256x256 image, while my model is only outputting a single label. How could I change the last layer of UNet so that its output matches my label?
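One possible direction, sketched under assumptions (the `decoder_channels` value and the 1x1 head below are hypothetical, not taken from any specific UNet repo): replace the final classification layer with a conv that outputs a single channel, so the prediction keeps the spatial size [batch, 1, 256, 256] of the depth map, and use a regression loss such as MSE instead of NLLLoss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# assumed channel count of the UNet decoder's last feature map
decoder_channels = 64

# a classification UNet would end in Conv2d(decoder_channels, num_classes, 1);
# for depth regression, output a single channel per pixel instead
depth_head = nn.Conv2d(decoder_channels, 1, kernel_size=1)

features = torch.randn(2, decoder_channels, 256, 256)  # decoder output
pred = depth_head(features)                            # [2, 1, 256, 256]

target = torch.rand(2, 1, 256, 256)                    # 256x256 depth labels
loss = F.mse_loss(pred, target)                        # regression loss, not NLLLoss
```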