My polynomial regression model is supposed to output the coordinates of polygon vertices from images. Since there can be multiple polygons in the same image and the number of vertices per polygon can vary, I've set up 2 output nodes, one for the x coordinates and one for the y coordinates.
I've padded the labels to the max_size of the labels, to create uniform-size labels.

The model's outputs for x and y have a different shape than the labels, so they require padding before the loss can be calculated.
Will this (after the model is fully trained) limit the output size to max_size?
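For context, here is a minimal sketch of the kind of label padding described above, assuming coordinates are stored as NumPy arrays and scaled so that a sentinel like -1 never occurs as a real coordinate (`pad_labels` and `max_size` are illustrative names, not from any specific library):

```python
import numpy as np

def pad_labels(polygons, max_size, pad_value=-1.0):
    """Pad variable-length (x, y) vertex lists to a fixed length.

    polygons: list of arrays of shape (n_i, 2), one per image
    returns: array of shape (len(polygons), max_size, 2),
             with unused slots filled with pad_value
    """
    out = np.full((len(polygons), max_size, 2), pad_value, dtype=np.float32)
    for i, poly in enumerate(polygons):
        out[i, : len(poly)] = poly
    return out

labels = pad_labels(
    [np.array([[0.1, 0.2], [0.3, 0.4]]),   # 2 vertices
     np.array([[0.5, 0.5]])],              # 1 vertex
    max_size=4,
)
# labels has shape (2, 4, 2); slots beyond each polygon's length hold -1
```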

I’m working with a CNN-polynomial regression model.

To clarify: you are feeding the model an image and want the coordinates of every polygon vertex in it, correct?

Suppose there are 20 vertices in one image, 15 in the second, and 2 in the third. In that case, I would make the output size of the model about 30% larger than the largest size needed to hold the coordinates of all vertices; in the example above, that is (20 * 2) * 1.3 = 52 outputs. Then make sure the label coordinates are ordered by their x, y position, and fill any unused slots with -1 (or some other value that can never appear in the scaled labels).
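The fixed-size-output scheme above can be sketched as follows; this is a hedged example, not a definitive implementation, and `make_target` / `masked_mse` are hypothetical helper names. The loss masks out the -1 padding slots so the model is not penalized on slots that hold no real vertex:

```python
import numpy as np

def make_target(vertices, n_slots, pad_value=-1.0):
    """Sort vertices by (x, y) and pad to a fixed number of slots."""
    v = np.asarray(vertices, dtype=np.float32)
    order = np.lexsort((v[:, 1], v[:, 0]))  # primary key x, secondary key y
    v = v[order]
    target = np.full((n_slots, 2), pad_value, dtype=np.float32)
    target[: len(v)] = v
    return target.ravel()  # flatten to n_slots * 2 output values

def masked_mse(pred, target, pad_value=-1.0):
    """Mean squared error that ignores padded slots in the target."""
    mask = target != pad_value
    return float(np.mean((pred[mask] - target[mask]) ** 2))

# At most 20 vertices -> (20 * 2) * 1.3 = 52 outputs, i.e. 26 (x, y) slots.
target = make_target([[0.4, 0.1], [0.2, 0.9]], n_slots=26)
perfect = masked_mse(target.copy(), target)  # 0.0 for an exact prediction
```

Sorting the labels gives the output slots a consistent meaning across images, which makes the regression target well defined.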

Alternatively, you could make the output the same width and height as the input image and treat this as an object-detection-style problem, with 0s where there are no vertices and 1s where there are.
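Building that kind of target might look like the sketch below (assuming integer pixel coordinates; `vertex_map` is an illustrative name). The network would then output an H x W map trained with a pixel-wise loss such as binary cross-entropy:

```python
import numpy as np

def vertex_map(vertices, height, width):
    """Build an H x W binary target: 1.0 at vertex pixels, 0.0 elsewhere."""
    m = np.zeros((height, width), dtype=np.float32)
    for x, y in vertices:
        m[int(round(y)), int(round(x))] = 1.0  # row = y, column = x
    return m

m = vertex_map([(3, 5), (10, 2)], height=16, width=16)
# m is 16 x 16 with exactly two pixels set to 1.0
```

One advantage of this formulation is that the number of vertices no longer needs to be bounded in advance; the trade-off is that exact sub-pixel coordinates must be recovered from the map afterwards.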