I checked the documentation of nn.CrossEntropyLoss and found that the target should have the shape [N, H, W]. In my case I use an RGB mask, which doesn't work.
When I change my target dataset to grayscale images, it works fine.
I would like to know: if I want to use an RGB image as the target, which loss should I use instead?
Thank you
nn.CrossEntropyLoss is usually used for classification use cases.
It seems you are trying to reconstruct the image somehow. If your targets are normalized tensors with values in [0, 1], you could use nn.BCELoss.
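Something like this minimal sketch, with made-up shapes just for illustration:

```python
import torch
import torch.nn as nn

# Made-up shapes: model output and target both normalized to [0, 1]
logits = torch.randn(4, 3, 24, 24, requires_grad=True)
output = torch.sigmoid(logits)        # squash model output into [0, 1]
target = torch.rand(4, 3, 24, 24)     # normalized target in [0, 1]

criterion = nn.BCELoss()
loss = criterion(output, target)
loss.backward()
```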
Thanks for the info.
In that case I would stick to a classification criterion.
Could you post a sample mask with its values?
Usually your mask should have the shape [batch_size, h, w] and contain the class indices for each pixel.
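For example, a minimal sketch with made-up shapes showing what nn.CrossEntropyLoss expects:

```python
import torch
import torch.nn as nn

batch_size, nb_classes, h, w = 2, 5, 24, 24  # made-up sizes

# Model output: raw logits with one channel per class
output = torch.randn(batch_size, nb_classes, h, w, requires_grad=True)
# Target: one class index per pixel, shape [batch_size, h, w], dtype long
target = torch.randint(0, nb_classes, (batch_size, h, w))

criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
loss.backward()
```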
In fact, my sample mask has the shape [batch_size, channels, h, w], the same size as the sample image, so the problem is the channel dimension, because my sample mask is an RGB image. In this case, which loss function should I use?
I guess your mask has some kind of color code in RGB, e.g. red ([255, 0, 0]) means car, while blue ([0, 0, 255]) means building.
If that’s the case, could you post a sample image or post the mapping directly, since we would need to transform the RGB color-coded mask into a class index mask.
Hi, thanks for replying.
I have two types of mask images: one is in RGB, the other is not.
Now I've run into a new problem: do I need to normalize my input images to values in [0, 1]?
I ask because I have built a U-Net, and after every training run the predicted mask is always a red image.
Do you know where the problem might be?
Your input can and should probably be normalized to properly train the model.
The mask however, should most likely not be normalized, as it contains some kind of class information.
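As a rough sketch (the paths, normalization stats, and mask format are assumptions, not your actual setup):

```python
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image

# Hypothetical file paths
image = Image.open("image.png").convert("RGB")
mask = Image.open("mask.png")

# Normalize the input image only (ImageNet stats used as an example)
image_transform = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = image_transform(image)

# Keep the mask as raw class indices; no ToTensor scaling, no normalization
y = torch.from_numpy(np.array(mask)).long()
```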
How is the other mask stored if not as RGB? Does it store the class indices directly for each pixel?
Could you post some values and the shape of an example mask you are currently using?
Thanks for the example.
Are you using these RGB values directly for your classes?
If so, I would recommend using a mapping such that each pixel contains only a class index in the range [0, nb_classes-1].
I mean something like a key-value pair between your RGB color codes and the corresponding class index.
E.g. [128, 64, 128] would map to class 0.
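A small sketch of what such a mapping could look like; the color codes and class names here are made up, so replace them with your dataset's values:

```python
import torch

# Hypothetical color-to-class mapping; substitute your dataset's codes
mapping = {
    (128, 64, 128): 0,  # e.g. road
    (255, 0, 0):    1,  # e.g. car
    (0, 0, 255):    2,  # e.g. building
}

def rgb_to_class_index(mask):
    # mask: uint8 tensor of shape [3, h, w]
    h, w = mask.shape[1:]
    target = torch.zeros(h, w, dtype=torch.long)  # unmapped pixels stay 0
    for color, cls in mapping.items():
        color = torch.tensor(color, dtype=mask.dtype).view(3, 1, 1)
        target[(mask == color).all(dim=0)] = cls
    return target  # shape [h, w], values in [0, nb_classes-1]
```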
Have a look at this post for another example using grayscale images.
Could you post the classes for each separate color in your segmentation mask?
Thank you for your explanation. I'm using the KITTI semantic segmentation dataset, which conforms to the Cityscapes dataset and has 30 classes. However, I didn't find the key-value pairs between my color codes and the corresponding class indices.
I have seen the post you mentioned, but I don't know the class mapping values either. Could you help?
I’m not sure where to find the mapping. Here it seems a mapping is given for 11 classes.
However, if you can’t find the right mapping, you could also just get all unique color codes and create your own mapping.
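E.g. something like this sketch, assuming your mask is stored as an RGB image at a hypothetical path:

```python
import numpy as np
from PIL import Image

# Hypothetical path to one of your RGB masks
mask = np.array(Image.open("mask.png").convert("RGB"))  # shape [h, w, 3]

# Collect all unique color codes present in the mask
colors = np.unique(mask.reshape(-1, 3), axis=0)
print(colors)  # one row per color code

# Build your own mapping from each color code to a class index
mapping = {tuple(color): idx for idx, color in enumerate(colors)}
```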
Thanks a lot for the effort you've put in; I really appreciate it. At the moment I am getting a CUDA error: out of memory. I will sort out the memory issue and update you.
In the U-Net example, if I do the image mapping, say with 10 classes for the labels, and the last layer of the network is log_softmax, does that mean the output of the network is a probability map for every pixel?
If so, after training with NLLLoss and the Adam optimizer, the weights of the network are optimized. Now, if I feed the network a random training image, the output is a probability map. If I want to visualize it, can I simply call imshow(output), or do I have to map it back first?
Yes, you will get the log probabilities with each channel corresponding to the class index.
plt.imshow should work. I would try to transform the output using torch.exp to get the probabilities in the range [0, 1], since the colormap might look more “natural”.
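Something like this sketch, assuming `output` is the [1, nb_classes, h, w] log-probability tensor returned by your model:

```python
import torch
import matplotlib.pyplot as plt

# Assuming `output` holds log probabilities of shape [1, nb_classes, h, w]
probs = torch.exp(output)                      # probabilities in [0, 1]

# Show the probability map of a single class, e.g. class 0
plt.imshow(probs[0, 0].detach().cpu().numpy())
plt.colorbar()
plt.show()

# Or show the predicted class index per pixel
pred = probs.argmax(dim=1)                     # shape [1, h, w]
plt.imshow(pred[0].cpu().numpy())
plt.show()
```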
Hi, I'm wondering how to create the mapping. I saw your mapping code in the post; how did you find the mapping relationship?
And how do I create my own mapping? Do you mean that I should check all the colors in my mask image and then find the HTML color code?