Training on contour data (positions) instead of image pixel data (pixel values)

I am trying to train a model on extracted contour data to check whether a certain shape has defects or not.
Contours are vectors of points, but I am confused about how to proceed. I am also curious whether this approach is right.
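For context, this is roughly what one sample looks like. I am assuming here that the contours come from something like OpenCV's `findContours` (the extraction step and the dummy shape are only illustrative):

```python
# Illustrative only: build a dummy binary shape and extract its contour,
# assuming OpenCV 4.x, where findContours returns (contours, hierarchy).
import cv2
import numpy as np

mask = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(mask, (64, 64), 40, 255, -1)   # filled dummy shape

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = contours[0].reshape(-1, 2)      # (N, 2) array of (x, y) points
print(contour.shape)                      # N varies from shape to shape
```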

Thank you.

If your contours have a fixed number of points, you could try fitting your model as a regression task where the outputs would be these contour points.
I don’t know what the best approach would be for a variable number of contour points. Maybe a recurrent architecture could work in this case.
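As a rough illustration of the recurrent idea, something like the following could consume contours of varying lengths. This is a minimal sketch assuming PyTorch; the architecture, hidden size, and two-class (defect / no defect) head are all placeholders, not a recommendation:

```python
# Minimal sketch: an LSTM over padded, variable-length contour point sequences.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class ContourLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # defect / no defect logits

    def forward(self, points, lengths):
        # points: (batch, max_len, 2) zero-padded (x, y) coordinates
        # lengths: true number of points in each contour
        packed = pack_padded_sequence(points, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)
        return self.head(h_n[-1])          # one logit pair per contour

# Toy usage: two contours of different lengths, padded to the same max length.
batch = torch.zeros(2, 50, 2)
batch[0, :50] = torch.rand(50, 2)
batch[1, :30] = torch.rand(30, 2)
logits = ContourLSTM()(batch, torch.tensor([50, 30]))
print(logits.shape)                        # torch.Size([2, 2])
```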

Thank you for your reply.
I will interpolate the contours so that they all have the same number of points and check the results.
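Something along these lines, resampling each contour at evenly spaced positions along its arc length (a minimal NumPy sketch; the function name and the point count of 100 are just placeholders):

```python
# Minimal sketch: resample an (N, 2) contour to a fixed number of points
# by linear interpolation along its cumulative arc length.
import numpy as np

def resample_contour(points, n_points=100):
    """points: (N, 2) array of (x, y); returns an (n_points, 2) array."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    dist = np.concatenate([[0.0], np.cumsum(seg)])          # arc length at each point
    targets = np.linspace(0.0, dist[-1], n_points)          # evenly spaced positions
    x = np.interp(targets, dist, points[:, 0])
    y = np.interp(targets, dist, points[:, 1])
    return np.stack([x, y], axis=1)

resampled = resample_contour(np.random.rand(73, 2), n_points=100)
print(resampled.shape)   # (100, 2), regardless of the original length
```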