Does FasterRCNN resize images internally before processing?

In the FasterRCNN model, are the bounding box predictions made on a scaled image size? I'm asking because I would like to write some of my own evaluation code, and I need to know whether to scale my ground-truth annotations accordingly.

On inspecting the Faster R-CNN model the applied transforms are:
(transform): GeneralizedRCNNTransform(
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    Resize(min_size=(800,), max_size=1333, mode='bilinear')
)
I do not know if this transform scales the corresponding bounding boxes as well, or if that has to be handled manually. Also, the docs for torchvision.transforms.Resize show no min_size parameter (it seems to belong to GeneralizedRCNNTransform itself), so I don't know whether 800 is a lower bound, i.e. smaller images get upscaled to it while larger ones pass through unchanged, or something else entirely.
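For what it's worth, my current understanding of the resize rule is: scale the shorter side up/down to min_size, unless that would push the longer side past max_size, in which case the longer side is capped at max_size instead. This is only my reading of it, not confirmed from the torchvision source, but here is a small sketch of that rule:

```python
def rcnn_scale_factor(h, w, min_size=800, max_size=1333):
    # My assumed resize rule: scale so the shorter side becomes
    # min_size, but cap the factor so the longer side does not
    # exceed max_size. (Assumption, not verified against torchvision.)
    scale = min_size / min(h, w)
    if scale * max(h, w) > max_size:
        scale = max_size / max(h, w)
    return scale

# A 480x640 image: shorter side goes to 800, longer side stays under 1333.
h, w = 480, 640
s = rcnn_scale_factor(h, w)
print(round(h * s), round(w * s))  # 800 1067

# A 1000x2000 image: scaling the shorter side to 800 would make the
# longer side 1600, so the longer side is capped at 1333 instead.
print(rcnn_scale_factor(1000, 2000) * 2000)  # 1333.0
```

If this is right, then the same factor would have to be applied to any ground-truth boxes, unless the model maps its predictions back to the original image size internally.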