Mapping the Label Image to Class Index for Semantic Segmentation

I have some doubts about mapping colors to class indices.
I have label images (raw pixel values ranging from 0 to 1), and visually there are three classes (black, green, and red). I want to create masks from these label images to feed to my segmentation model (which uses cross-entropy loss).

After looking at some code from a forum post:

```python
import torch

# Create mapping
# Get color codes for dataset (maybe you would have to use more than a single
# image, if it doesn't contain all classes)
target = torch.from_numpy(target)  # target: H x W x 3 numpy array
colors = torch.unique(target.view(-1, target.size(2)), dim=0).numpy()
```
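Extending that idea, here is one possible sketch of the full color-to-index mapping. The palette below (black, green, red) is an assumption for illustration; the actual colors should come from `torch.unique` on the real labels:

```python
import torch

# Hypothetical 3-class palette -- replace with the colors found in your labels.
palette = torch.tensor([
    [0.0, 0.0, 0.0],  # class 0: black
    [0.0, 1.0, 0.0],  # class 1: green
    [1.0, 0.0, 0.0],  # class 2: red
])

def color_to_index(target: torch.Tensor, palette: torch.Tensor) -> torch.Tensor:
    """Map an H x W x 3 float label image to an H x W LongTensor of class indices."""
    mask = torch.zeros(target.shape[:2], dtype=torch.long)
    for idx, color in enumerate(palette):
        # exact per-pixel match against this palette color
        mask[(target == color).all(dim=-1)] = idx
    return mask

# Tiny 2 x 2 example: black, green / red, black
target = torch.tensor([[[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                       [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])
print(color_to_index(target, palette))  # → tensor([[0, 1], [2, 0]])
```

This exact-match approach only works if every pixel takes one of the palette values exactly, which ties into the interpolation question below.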

Implementing this, I got `colors` with shape (7824, 3), meaning there are 7824 different colors, right?

Could I have some guidance on how to use this to create masks for all my label images, where each mask contains only the class index (black → class 0, green → class 1, red → class 2)?

I assume you are referring to this post.

Try to adapt the code to your use case and make sure you are dealing with the same data shapes (i.e. check for the same dimension layout etc.).
One possible reason for the large number of unique colors could be the use of an interpolation method other than nearest neighbor while resizing.
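To illustrate that point, a small self-contained sketch (using a synthetic tensor, not your actual labels) comparing the two interpolation modes:

```python
import torch
import torch.nn.functional as F

# Synthetic two-value label map: left half 0.0, right half 1.0.
label = torch.zeros(1, 1, 4, 4)
label[..., 2:] = 1.0

# Bilinear resizing blends values at the class boundary, inventing new "colors"...
bilinear = F.interpolate(label, size=(8, 8), mode="bilinear", align_corners=False)
# ...while nearest-neighbor resizing keeps the original value set intact.
nearest = F.interpolate(label, size=(8, 8), mode="nearest")

print(torch.unique(nearest))           # still only the two original values
print(torch.unique(bilinear).numel())  # more than two distinct values
```

So if the label images were resized with a blending mode at any point, re-exporting or re-resizing them with nearest neighbor should collapse the color count back to the true number of classes.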

Thank you @ptrblck. I will give this a try!