This image is a segmentation result produced by a segmentation algorithm, and the original image is fed into a neural network to obtain feature vectors.
My question is: how can I get the feature vectors of the yellow and green regions?
Can you share some sample code?
Generally, in segmentation models, the output shape is
[batch, classes, height, width], which, based on your image, would be
[batch, 3, h, w]. So you can get the score map for each class just by indexing the channel dimension, e.g.
output[:, 0, :, :] for what is presumably the black background.
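As a minimal sketch of that indexing (the shapes and class-to-channel assignment here are assumptions, shown with NumPy for simplicity):

```python
import numpy as np

# Hypothetical segmentation output of shape [batch, classes, height, width];
# 3 classes as in the image (e.g. background, green, yellow).
batch, num_classes, h, w = 1, 3, 64, 64
output = np.random.randn(batch, num_classes, h, w)

# Per-class score map: index the channel dimension.
background_map = output[:, 0, :, :]  # channel 0, possibly the background
green_map = output[:, 1, :, :]       # assumed channel for the green class
yellow_map = output[:, 2, :, :]      # assumed channel for the yellow class

print(background_map.shape)  # (1, 64, 64)
```

The same slicing works on a PyTorch tensor; which channel corresponds to which class depends on how your model was trained.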
Firstly, thanks for your reply.
But I already have the segmentation result. I just want to obtain the feature vectors of the regions of interest, i.e. the feature vectors of the yellow and green regions, to use for other tasks.
My current idea is to run the original image through a convolutional neural network to obtain feature maps, and then extract the feature vectors via the one-to-one pixel correspondence with the segmentation result map. What do you think of this idea?
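That idea is essentially masked average pooling. A minimal sketch, assuming a [C, H, W] feature map and a [H, W] segmentation label map at the same resolution (all shapes, class ids, and the random data here are hypothetical):

```python
import numpy as np

# Hypothetical CNN feature map [C, H, W] and segmentation label map [H, W]
# with class ids (assumed: 0 = background, 1 = green, 2 = yellow).
C, H, W = 16, 8, 8
rng = np.random.default_rng(0)
features = rng.standard_normal((C, H, W))
seg = rng.integers(0, 3, size=(H, W))

def region_feature_vector(features, seg, class_id):
    """Average the per-pixel feature vectors over one segmented region
    (masked average pooling)."""
    mask = seg == class_id          # [H, W] boolean mask for the region
    region = features[:, mask]      # [C, N] features of the N region pixels
    return region.mean(axis=1)      # [C] one feature vector for the region

green_vec = region_feature_vector(features, seg, 1)
yellow_vec = region_feature_vector(features, seg, 2)
print(green_vec.shape, yellow_vec.shape)  # (16,) (16,)
```

One caveat: CNN feature maps are usually downsampled relative to the input, so the segmentation map must first be resized (e.g. nearest-neighbor) to the feature map's spatial size before the masks line up pixel-to-pixel.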