I am using the awesome grid sampler to apply affine transformations to my images.

https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html#torch.nn.functional.grid_sample

For this, I generate a mesh grid over the [-1, 1] square, which I then transform with the desired affine matrix. This is the grid I hand to the grid_sample method. Scaling differences and interpolation are handled internally by the sampler.
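For context, here is a minimal sketch of this setup, assuming the grid is built with `torch.nn.functional.affine_grid` (which generates exactly this kind of transformed [-1, 1] mesh grid); the identity matrix and image sizes are placeholders:

```python
import torch
import torch.nn.functional as F

# Placeholder 2x3 affine matrix (identity here), batch of 1.
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])

img = torch.rand(1, 3, 8, 8)  # (N, C, H, W)

# affine_grid builds the [-1, 1] mesh grid already transformed by theta.
grid = F.affine_grid(theta, size=(1, 3, 8, 8), align_corners=False)
out = F.grid_sample(img, grid, align_corners=False)
```

With the identity matrix, `out` reproduces `img` up to floating-point error.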

Now I would like to also transform my segmentation masks, which for precision reasons are stored as polygons (lists of x, y coordinates). Although I could create a binary mask and apply grid_sample to it, this would mean a loss of precision (polygons sometimes overlap, and I don't want to create unnecessary loops).
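For reference, the binary-mask fallback I would like to avoid looks roughly like this (a toy rectangular mask stands in for a rasterized polygon; the matrix and sizes are placeholders):

```python
import torch
import torch.nn.functional as F

# Toy rectangular mask standing in for a rasterized polygon.
mask = torch.zeros(1, 1, 8, 8)
mask[:, :, 2:6, 2:6] = 1.0

theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 8, 8), align_corners=False)
# mode='nearest' keeps the mask binary instead of blurring its edges.
warped = F.grid_sample(mask, grid, mode='nearest', align_corners=False)
```

This works, but it rasterizes the polygon first, which is exactly the precision loss described above.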

How should the polygons be formatted in order to conform with grid_sample's internal resizing?

I tried the sequence of

a) rescale the polygon coordinates so that they lie in [-1, 1] relative to the source image dimensions,

b) apply the affine transformation matrix to the polygon coordinates,

c) rescale the polygon back into the target image dimensions.
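A sketch of these three steps follows. One caveat worth noting: the grid handed to grid_sample maps *output* locations to *input* sampling locations, so to move polygon points the same way the image content moves, step b) generally needs the *inverse* of the matrix used to build the grid. The function name and the align_corners=True scaling convention below are my own choices, not anything from the PyTorch API:

```python
import torch

def transform_polygon(poly_xy, theta_2x3, src_hw, dst_hw):
    """Map polygon pixel coords (P, 2) through the same normalized
    space that grid_sample uses.  Illustrative sketch, not library code."""
    h_s, w_s = src_hw
    h_d, w_d = dst_hw
    # a) rescale pixel coords to [-1, 1] (align_corners=True convention)
    norm = torch.empty_like(poly_xy)
    norm[:, 0] = poly_xy[:, 0] / (w_s - 1) * 2 - 1
    norm[:, 1] = poly_xy[:, 1] / (h_s - 1) * 2 - 1
    # b) invert the affine map: grid_sample's grid is the output->input
    # mapping, so polygon points move by the inverse of theta.
    A = theta_2x3[:, :2]
    t = theta_2x3[:, 2]
    moved = (norm - t) @ torch.inverse(A).T
    # c) rescale back to pixel coords in the target image
    res = torch.empty_like(moved)
    res[:, 0] = (moved[:, 0] + 1) / 2 * (w_d - 1)
    res[:, 1] = (moved[:, 1] + 1) / 2 * (h_d - 1)
    return res

theta = torch.tensor([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])  # identity placeholder
poly = torch.tensor([[0.0, 0.0], [7.0, 7.0], [3.0, 5.0]])
res = transform_polygon(poly, theta, (8, 8), (8, 8))
```

With the identity matrix and equal source/target sizes, the polygon comes back unchanged; with a real matrix, the result should land where grid_sample moves the corresponding image content.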

This unfortunately was not successful.