Ground truth transform for image

Hello, I am new to PyTorch and I would like to use it for road segmentation. For image segmentation, do I need to transform the ground truth (label) image with ToTensor() so it becomes 4-dimensional, or should I just convert it to a tensor with long dtype? Thanks for any reply.

If you are using e.g. nn.CrossEntropyLoss as your criterion, your label image should contain the class indices and have the shape [batch_size, height, width]. ToTensor() should not be called on the label, since this would normalize the values to the range [0, 1].
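A minimal sketch of these shapes (the model output, image size, and class count are assumptions, not from the post): the logits have a class dimension, while the target is a long tensor of class indices created directly from the raw mask array, without ToTensor().

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical setup: 2-class road segmentation, batch of 1, 224x224 images.
batch_size, nb_classes, H, W = 1, 2, 224, 224

# Model output (logits): [batch_size, nb_classes, height, width]
output = torch.randn(batch_size, nb_classes, H, W)

# Ground truth mask holding class indices, e.g. loaded with PIL into a
# NumPy array. Convert it to a long tensor directly -- no ToTensor().
mask = np.random.randint(0, nb_classes, (H, W))          # stand-in for a real mask
target = torch.from_numpy(mask).long().unsqueeze(0)      # [batch_size, height, width]

criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
print(loss.item())
```

Note that the target has no channel dimension at all; nn.CrossEntropyLoss pairs an [N, C, H, W] input with an [N, H, W] index target.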

Hey, thanks for your reply. I followed your suggestion, and my ground truth now contains pixels with values 0 and 255. I get an error in the criterion step:

ValueError: Target size (torch.Size([1, 224, 224])) must be the same as input size (torch.Size([1, 2, 224, 224]))

Does that mean I have to transform my input image to grayscale first? I noticed that the first dimension changed from 3 (RGB) to 2 in the training step.

Which criterion are you using?
The shape should be fine for a two-class classification problem using nn.CrossEntropyLoss.
However, your target should contain the class indices in the range [0, nb_classes-1], i.e. [0, 1] in your case.
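A sketch of that remapping, assuming a 0/255 binary mask as described in the post: map 255 to class index 1 so the target holds valid indices, then the shapes from the error message work with nn.CrossEntropyLoss as-is.

```python
import torch
import torch.nn as nn

# Binary mask with pixel values 0 and 255, as described in the post.
raw_mask = torch.tensor([[0, 255], [255, 0]]).repeat(112, 112)  # [224, 224]

# Map {0, 255} -> {0, 1} so the target contains valid class indices.
target = (raw_mask == 255).long().unsqueeze(0)  # [1, 224, 224]

# nn.CrossEntropyLoss accepts [N, C, H, W] logits with an [N, H, W] index
# target, so the shapes from the error message are fine here.
output = torch.randn(1, 2, 224, 224)
loss = nn.CrossEntropyLoss()(output, target)
print(loss.item())

# By contrast, nn.BCEWithLogitsLoss expects target and input shapes to match
# exactly, which raises a "Target size ... must be the same as input size"
# error like the one above -- worth checking which criterion is in use.
```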