How to fix predictions for a different image size?


I trained a Mask R-CNN with a ResNet-50 backbone on my data. During training, the input images were 512×1024 in size.
When I give an input picture of this size, the results are very good. However, on a 4096×8192 input image the model cannot find any detections.

What can I do?

You could reshape the new images to the trained shape. These larger images would be new to the model and it might thus fail, as e.g. the feature extraction might not be able to find any useful features if the resolution changes “a lot”. Note that the decrease in performance might not be that drastic if you slightly change the resolution.


Thank you for the answer.

Before starting training I set the anchor sizes and aspect ratios, for example:

(I didn’t give it according to a rule, I just wanted it to learn in different scales.)

 anchor_sizes: ((32,), (64,), (128,), (256,), (512,))
 aspect_ratio: ((0.15, 0.2, 0.25, 0.5, 1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 8.0),)

I thought it might work, but it didn't yield any results.

Also, how can I choose these anchor settings according to my dataset?

My dataset information:

Images are 512×1024, and the labeled objects are typically around 30×120 or 45×150 pixels.
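One common way to pick anchors from a dataset is to look at the statistics of the ground-truth boxes: use `sqrt(width * height)` as the object scale and `height / width` as the aspect ratio (torchvision's anchors use height/width), then choose anchor sizes and ratios from the percentiles. A minimal sketch with made-up boxes standing in for your annotations:

```python
import numpy as np

# Made-up ground-truth boxes as (x1, y1, x2, y2); replace with your dataset's labels
boxes = np.array([
    [0, 0, 30, 120],
    [0, 0, 45, 150],
    [100, 50, 140, 170],
], dtype=np.float64)

w = boxes[:, 2] - boxes[:, 0]
h = boxes[:, 3] - boxes[:, 1]

scales = np.sqrt(w * h)  # anchor "size" ~ sqrt(box area)
ratios = h / w           # torchvision anchors use height/width

# One anchor size per FPN level, spread over the scale distribution
anchor_sizes = tuple((int(s),) for s in np.percentile(scales, [10, 30, 50, 70, 90]))
# A shared ratio tuple covering the bulk of the ratio distribution
aspect_ratios = (tuple(np.round(np.percentile(ratios, [25, 50, 75]), 2)),) * len(anchor_sizes)
print(anchor_sizes, aspect_ratios)
```

With enough labeled boxes this gives anchors matched to the actual object shapes instead of hand-picked values; clustering (e.g. k-means on width/height) is a more refined variant of the same idea.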

I don't want to resize the images or tile them at prediction time, because I either miss objects or can't merge the detections across tiles.