How to select pixels of ROI from feature map

Yep, it’s a makeshift approach for now to just mask out all pixels that are not in my quadrilateral.
But the aim is to select only the pixels belonging to the quadrilateral ROIs and then pass those to F.interpolate().
Is there any way to do that? Something like what tf.crop_and_resize does, but for floating-point coordinates of a region defined by 8 coordinates rather than the 4 corners of an axis-aligned box.
Basically a perspective transform on the feature map, but with an operation that is differentiable.

I think in that case grid_sample might be a useful function, as it samples the input at the pixel locations given by a specified grid.
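For reference, a minimal sketch of that idea, assuming a (1, C, H, W) feature map and a quadrilateral given by four corner points in pixel coordinates. The grid is built by bilinearly interpolating between the corners (an approximation, not a true perspective warp), and the helper name `sample_quad_roi` is just made up for illustration:

```python
import torch
import torch.nn.functional as F

def sample_quad_roi(feat, corners, out_h, out_w):
    # feat:    (1, C, H, W) feature map
    # corners: (4, 2) tensor of (x, y) pixel coordinates ordered
    #          top-left, top-right, bottom-right, bottom-left
    _, _, H, W = feat.shape
    tl, tr, br, bl = corners

    # bilinearly interpolate sampling locations between the four corners
    u = torch.linspace(0, 1, out_w, device=feat.device).view(1, out_w, 1)
    v = torch.linspace(0, 1, out_h, device=feat.device).view(out_h, 1, 1)
    top = tl + u * (tr - tl)            # (1, out_w, 2)
    bottom = bl + u * (br - bl)         # (1, out_w, 2)
    grid = top + v * (bottom - top)     # (out_h, out_w, 2)

    # grid_sample expects coordinates normalized to [-1, 1]
    gx = grid[..., 0] / (W - 1) * 2 - 1
    gy = grid[..., 1] / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)   # (1, out_h, out_w, 2)

    return F.grid_sample(feat, grid, align_corners=True)

# toy example: gradients flow back to both `feat` and `corners`
feat = torch.randn(1, 64, 50, 50, requires_grad=True)
corners = torch.tensor([[10., 5.], [40., 8.], [38., 45.], [12., 42.]], requires_grad=True)
roi = sample_quad_roi(feat, corners, out_h=7, out_w=7)   # (1, 64, 7, 7)
```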


Hello @ptrblck,
I am facing the same problem: I have 5 points that are produced by a simple NN, and I want to get the area inside them to generate a mask, so that I can apply a segmentation loss on that mask.
I implemented it using cv2.fillPoly after converting the 5 points to numpy, but unfortunately I lost the gradients.

I thought about your comment to use grid_sample, but unfortunately I don’t know how I could generate the mask with respect to the 5 points without losing the gradient tracking.

My main problem is that I don’t have all the desired positions to create my grid, and if I use fillPoly to get the positions, I will lose the gradient trace.

Any suggestions?

Usually you would create the target mask before the training so that you wouldn’t need gradients for this operation.
Could you explain your use case a bit and how you would like to use these gradients for the mask?


First of all, thank you for your fast reply.

Right, for the GT there is no need, but my network generates, for example, 4 points representing a rotated box, each point given as (x, y). I then need to convert the network output into a mask as well, so that I can apply a segmentation loss on both the predicted mask (obtained from the predicted points) and the GT mask.

I tried to convert the predicted points to a mask using cv2.fillPoly, but the loss didn’t change during training. After investigating, I realized this happens because I lose the gradient trace when converting the predictions to numpy to use fillPoly, so I need another way to do this that allows backpropagation.

I hope this time I was clear enough.

I’m unsure, but take a look at this approach and see if you could reuse it.
If I understand it correctly, you are dealing with mask targets, but your model outputs just coordinates, so you would want to create a mask from these coordinates?


You are completely right, but unfortunately the link will not help me, as it is about boxes. I agree it is easy to generate a mask from the coordinates if they form a box.

But in my case the coordinates can form an arbitrary shape, for example a star, so the approach proposed in that link will not help me. I need something like cv2.fillPoly but in PyTorch, so that the gradient trace is kept.


I see. You could try to check the source code of fillPoly and see if you could either port it directly to PyTorch, so that Autograd would create the backward pass automatically for you, implement the backward function manually (if that’s possible at all), or alternatively let your model output the mask directly (which might be the easiest approach).
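One possible workaround for the rotated-box case (not a port of fillPoly, and it only handles convex polygons, so it wouldn’t cover the star example) is a soft rasterization: multiply sigmoids of the signed distances to each edge, which stays differentiable w.r.t. the predicted corners. A rough sketch under that assumption (the function name and the sharpness value are made up):

```python
import torch

def soft_convex_polygon_mask(vertices, height, width, sharpness=10.0):
    # vertices: (N, 2) tensor of (x, y) pixel coordinates of a *convex* polygon,
    # ordered consistently (clockwise or counter-clockwise); if the mask comes
    # out inverted, reverse the vertex order or negate `sharpness`.
    ys, xs = torch.meshgrid(
        torch.arange(height, dtype=vertices.dtype, device=vertices.device),
        torch.arange(width, dtype=vertices.dtype, device=vertices.device),
        indexing="ij",
    )
    pts = torch.stack([xs, ys], dim=-1)                     # (H, W, 2) pixel coords

    mask = torch.ones(height, width, dtype=vertices.dtype, device=vertices.device)
    n = vertices.shape[0]
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        edge = b - a
        rel = pts - a
        # signed distance of every pixel to the line through this edge
        cross = (edge[0] * rel[..., 1] - edge[1] * rel[..., 0]) / (edge.norm() + 1e-6)
        mask = mask * torch.sigmoid(sharpness * cross)      # soft half-plane test
    return mask                                             # (H, W), values in (0, 1)
```

The resulting soft mask can then be compared against the GT mask with a segmentation loss, and gradients will flow back to the predicted corner coordinates.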


Hello, I’m sorry to bother you.
May I ask you a question?
My question is this: I have feature maps obtained by passing the original image through a CNN, and I also have a segmentation result, but I want to get the feature vectors of the yellow and green regions.
The segmentation result is shown below.
[segmentation result image: 2007_000063]

How would these feature vectors be defined for these regions?
I.e. would you like to get a specific activation in a previous layer or would you like to process the output in a special way?

By feature vectors I mean the feature maps.
I want to process the output in a special way.
In other words, I want the feature maps of the yellow and green regions.

To get the activation maps you could use forward hooks as described here.
However, the pixel locations of these maps might not correspond to the output locations and it depends on the architecture of your model.
E.g. convolution layers will use filters with a specific window, stride, dilation, etc., so that you would have to calculate the receptive field of the output locations for each activation map.
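For reference, a minimal forward-hook sketch (the torchvision model and the choice of layer3 are just placeholders for illustration):

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
activations = {}

def save_activation(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# register on whichever intermediate module you are interested in
model.layer3.register_forward_hook(save_activation("layer3"))

x = torch.randn(1, 3, 224, 224)
_ = model(x)
feat = activations["layer3"]   # e.g. (1, 256, 14, 14) for resnet18
```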

If I interpolate the feature maps bilinearly to the same size as the original image and then match the pixel positions one to one, would that be OK?

No, this still wouldn’t work, as each output pixel position might be calculated by a larger field of the input activation(s).
If all your convolutions use a 1x1 kernel, the output pixel locations would correspond to the input locations.
Captum uses specific methods for model interpretability, which e.g. use the gradient flow to visualize which activations were “important” for which output part, but that doesn’t seem to be your use case.

Thanks for your reply.
I understand what you’re saying.
But I still can’t figure out what method I should use to extract feature maps of an ROI with an irregular shape.

But I remember that Mask R-CNN uses ROI Align, which can extract the feature maps of a candidate box.

ROIAlign interpolates the proposals, which are overlaid on the feature maps, no?

As I said, you could calculate the receptive field, if you know all conv setups.
Interpolating an activation map to the same shape as the output will not create a 1 to 1 mapping.
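For reference, torchvision ships this op as torchvision.ops.roi_align; a rough usage sketch, assuming a feature map downsampled 16x relative to the input image (all numbers here are made up):

```python
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 256, 32, 32, requires_grad=True)   # (N, C, H, W) feature map

# boxes in image coordinates: (batch_index, x1, y1, x2, y2)
boxes = torch.tensor([[0, 100.0, 120.0, 300.0, 280.0]])

# spatial_scale maps image coordinates onto the feature map (1/16 here)
crops = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=1.0 / 16)
print(crops.shape)   # torch.Size([1, 256, 7, 7])
```

Note that this still only handles axis-aligned boxes, not arbitrary or irregular regions.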

Thanks for your reply.
I understand what you mean, and I know what I should do next.
Thanks again.

That’s good to hear! Feel free to post your final approach, if you’ve found a good way to get these receptive fields or if you can reuse the code logic of ROIPooling etc. 🙂

Hello, I also have the same requirement. Have you found a solution?
Best wishes to you.