RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

Hi @ptrblck

Thank you! Two questions:

  1. In one of your past posts here, you wrote:

> No, for multi-class classification (one target class for each sample), the targets should hold the class indices. Other frameworks often use one-hot encoded target vectors, which is not necessary in PyTorch. Have a look at the docs for more information.

I was wondering whether the target vectors need to be one-hot encoded or not? (My current understanding is sketched right after this list.)

  2. When you say "Assuming your current target is one-hot encoded in the channel dimension, i.e. it uses a 1 for the 'active' class in that channel while all other channels contain zeros", how can I encode the data for three different classes?
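
For context, here is how I currently understand the quoted advice; the shapes and class count below are made-up values just for the sketch, so please correct me if I got the expected format wrong:

```python
import torch
import torch.nn as nn

# My understanding: for multi-class segmentation the target holds class
# indices, not one-hot vectors. nn.CrossEntropyLoss then expects:
#   logits: [N, C, H, W]  (raw model output)
#   target: [N, H, W]     (long tensor with values in 0 .. C-1)
N, C, H, W = 1, 3, 256, 256                # assumed sizes, just for the sketch

criterion = nn.CrossEntropyLoss()
logits = torch.randn(N, C, H, W)           # dummy model output
target = torch.randint(0, C, (N, H, W))    # dummy class-index target (dtype long)

loss = criterion(logits, target)           # no one-hot encoding needed
print(loss.item())
```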

Currently my target (`masks` in the code snippet in question) looks like this:

```
# print(masks)
tensor([[[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 1.],
          ...,
          [1., 1., 0.,  ..., 0., 0., 1.],
          [1., 0., 1.,  ..., 0., 0., 0.],
          [1., 1., 0.,  ..., 0., 0., 0.]],

         [[1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 1., 1., 0.],
          [0., 0., 0.,  ..., 1., 1., 1.],
          [0., 0., 1.,  ..., 1., 1., 1.]]]])
```

Also, the shapes of `X`, `y`, and `masks` are, respectively:

```
(1, 5, 256, 256)
(1, 3, 256, 256)
torch.Size([1, 3, 256, 256])
```
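
If I understand correctly, one way to get class indices from my one-hot `masks` would be to take an argmax over the channel dimension, which would also turn the 4D target into the 3D one the error message asks for. A minimal sketch of what I plan to try (the dummy `masks` content below is just for illustration):

```python
import torch

# masks: my current one-hot target, shape [1, 3, 256, 256]
# (the channel dimension holds the three classes)
masks = torch.zeros(1, 3, 256, 256)
masks[:, 0] = 1.0                          # dummy content, just for the sketch

# Pick, for every pixel, the channel that holds the 1: this collapses the
# channel dimension and yields class indices in 0..2.
target = masks.argmax(dim=1)               # shape [1, 256, 256], dtype long

print(target.shape)                        # torch.Size([1, 256, 256])
# loss = criterion(output, target)         # output: [1, 3, 256, 256] logits
```

Would this be the right way to encode the three classes, or am I missing something?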