Annotating Semantic Segmentation in PyTorch

I am trying to follow the PyTorch semantic segmentation documentation on my own dataset (link to documentation: TorchVision Object Detection Finetuning Tutorial — PyTorch Tutorials 1.8.1+cu102 documentation). With the PennFudan dataset that the tutorial uses, I get a functioning model. I've made two datasets of completely different images for two different applications. I annotated the first set of images myself using Labelme, and the model trained correctly. But when I annotated the second dataset the same way, I get the error:

`ValueError: zero-size array to reduction operation minimum which has no identity`

I am using the exact same model file for all three datasets, so I know the model isn't the problem; it must be the data. I set the mask pixel values to the id of each object in the image (1 for the first object, 2 for the second, and so on). I can't figure out why the same model works for two datasets but not the third.

The annotation process was also very long, so is there a simpler way to annotate the images in only a few steps?
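For context, that exact error message is what NumPy raises when `min()` is called on an empty array. A minimal way to reproduce it (a sketch, suggesting the dataset code ends up computing a bounding box over a per-object mask that contains no pixels):

```python
import numpy as np

# Reproduce the error from the question: reducing an empty array with min()
# has no identity element, so NumPy raises a ValueError.
empty = np.array([], dtype=np.int64)
try:
    empty.min()
except ValueError as e:
    print(e)  # zero-size array to reduction operation minimum which has no identity
```

This points at the masks: if the mask image is loaded in a mode whose pixel values don't match the expected object ids, the per-object boolean masks can come out empty, and the tutorial's bounding-box computation then hits this error.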

Solution: I needed to convert the mask to "L" (8-bit grayscale) mode using `.convert("L")` when loading it, so that its pixel values line up with the object ids.
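A minimal sketch of the fix, following the box-building logic from the tutorial. The tiny in-memory RGB mask here is a hypothetical stand-in for a mask file saved by an annotation tool; the key line is the `.convert("L")` call before turning the image into an array:

```python
import numpy as np
from PIL import Image

# Hypothetical stand-in for a saved annotation: an RGB mask where each
# object's pixels store its id in all three channels
# (0 = background, 1 = first object).
mask_img = Image.new("RGB", (8, 8), (0, 0, 0))
for x in range(2, 6):
    for y in range(2, 6):
        mask_img.putpixel((x, y), (1, 1, 1))

# Converting to "L" (8-bit grayscale) collapses the image to a single
# channel whose pixel values are the object ids, which is what the
# tutorial's dataset code expects.
mask = np.array(mask_img.convert("L"))

obj_ids = np.unique(mask)[1:]  # drop the background id 0

# Per-object boolean masks and bounding boxes, as in the tutorial.
# Without the "L" conversion, np.array(mask_img) would be 3-channel and
# the comparison below would not produce valid per-object masks.
masks = mask == obj_ids[:, None, None]
boxes = []
for m in masks:
    pos = np.nonzero(m)
    boxes.append([pos[1].min(), pos[0].min(), pos[1].max(), pos[0].max()])

print(boxes)  # one box per object id found in the mask
```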
