I am using PyTorch for object detection and fine-tuning an existing model (transfer learning) as described in the following link - TorchVision Object Detection Finetuning Tutorial — PyTorch Tutorials 1.8.0 documentation

While transformations are used for image augmentation (e.g. horizontal flip), the tutorial doesn’t mention anything about transforming the bounding boxes/annotations to keep them aligned with the transformed image. Am I missing something basic here?

The transformations used in this section come from references/detection/, as mentioned in the tutorial, and they are applied to both the image and the target.
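For what it's worth, my understanding is that these paired transforms take `(image, target)` and adjust the boxes themselves. A minimal sketch of what such a horizontal flip might look like (this is my own illustration, not the actual code in references/detection/transforms.py; it assumes a CHW image tensor and boxes in `(xmin, ymin, xmax, ymax)` format):

```python
import torch

class RandomHorizontalFlip:
    """Sketch of a paired transform: flips the image and mirrors
    the bounding boxes so they stay aligned with it."""

    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, image, target):
        if torch.rand(1).item() < self.prob:
            # Flip the image along its width (last) axis.
            image = image.flip(-1)
            width = image.shape[-1]
            boxes = target["boxes"]  # shape [N, 4]
            # Mirror the x-coordinates: new_xmin = width - old_xmax,
            # new_xmax = width - old_xmin (y-coordinates unchanged).
            boxes[:, [0, 2]] = width - boxes[:, [2, 0]]
            target["boxes"] = boxes
        return image, target

# Example: a 10-pixel-wide image, one box at x in [2, 5]
flip = RandomHorizontalFlip(prob=1.0)  # always flip, for demonstration
img = torch.zeros(3, 4, 10)
tgt = {"boxes": torch.tensor([[2.0, 1.0, 5.0, 3.0]])}
img, tgt = flip(img, tgt)
print(tgt["boxes"])  # box mirrored to x in [5, 8]
```

So if the reference transforms work like this, the box coordinates are updated in the same call that flips the image - but I'd like confirmation that this is actually what happens in the tutorial's pipeline.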