I am new to deep learning and model training. In my training loop I am trying to compute the segmentation loss for an image, so before the cross-entropy loss calculation I used interpolate to resize the output that came out of the model.

The shape of the image that comes out of the model is (5, 36, 180, 320),

where 5 is the batch size, 36 is the number of channels, and 180 x 320 is H x W.

The target image shape is (5, 1, 180, 320),

where 5 is the batch size, 1 is the channel, and 180 x 320 is H x W.

The target image's single channel holds values from 0 to 35, which is the number of available segmentation classes.
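For reference, here is a minimal sketch of the shapes involved, using random tensors as stand-ins for my real model output and target:

```python
import torch

# Dummy stand-ins for my real tensors (shapes match what I described above)
out = torch.randn(5, 36, 180, 320)               # model output: (batch, classes, H, W)
target = torch.randint(0, 36, (5, 1, 180, 320))  # target: (batch, 1, H, W), class ids 0..35

print(out.shape)     # torch.Size([5, 36, 180, 320])
print(target.shape)  # torch.Size([5, 1, 180, 320])
```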

Now I am trying to resize the model output to (1, 180, 320)

so it can match the target image size, but there is an issue.

I used it like this:

```
outs = F.interpolate(out, size=target.size()[1:], mode='bilinear', align_corners=False).squeeze(dim=1)
# out -> the model output, shape (5, 36, 180, 320)
# target.size()[1:] is (1, 180, 320)
crit_loss = crit(outs, target.squeeze(dim=1))
# crit is cross-entropy loss
loss += loss_coeff * crit_loss
```

and I get this error:

```
Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [180, 320] and output size of torch.Size([1, 180, 320]). Please provide input tensor
```
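The error reproduces with random tensors too, so I assume it is about how I pass the `size` argument rather than about my data:

```python
import torch
import torch.nn.functional as F

out = torch.randn(5, 36, 180, 320)               # stand-in for the model output
target = torch.randint(0, 36, (5, 1, 180, 320))  # stand-in for the target

try:
    # target.size()[1:] is (1, 180, 320) -- three values, but a 4D input
    # only has two spatial dimensions (H, W)
    F.interpolate(out, size=target.size()[1:], mode='bilinear', align_corners=False)
except ValueError as e:
    print(e)  # the "same number of spatial dimensions" error from above
```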

I have tried every way I could come up with, but nothing worked.

If I use size=(180, 320) inside interpolate, I get outs with shape (36, 180, 320), which causes an issue at crit. What should I do, and where did it go wrong? Please help me.