Different loss and performance when training an autoencoder on horizontally flipped data

Hello, I’m currently training an autoencoder on a dataset of images of a circular object that should possess both mirror and rotational symmetries. To debug some issues I was having with augmentations, I tried training on a copy of the data that had been horizontally flipped. Despite the object’s symmetry, the loss follows a distinctly different trajectory during training (this is repeatable) and the final performance differs. Any ideas what might be causing this, or how I might go about diagnosing it? I feel understanding this is a prerequisite to using augmentations. Thank you!
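
For reference, a stripped-down sketch of the comparison I’m running (synthetic data and a toy model stand in for my real dataset and architecture, which I haven’t included here):

```python
import torch
import torch.nn as nn

# Placeholder for the real dataset: synthetic grayscale images.
torch.manual_seed(0)
data = torch.randn(256, 1, 32, 32)

def make_model():
    # Toy conv autoencoder; the real model is larger.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
    )

def train(images, label):
    torch.manual_seed(0)  # same init and shuffling order for both runs
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(10):
        perm = torch.randperm(len(images))
        total = 0.0
        for i in range(0, len(images), 32):
            batch = images[perm[i:i + 32]]
            loss = loss_fn(model(batch), batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        print(f"{label} epoch {epoch}: {total:.4f}")

train(data, "original")
train(torch.flip(data, dims=[-1]), "hflipped")  # horizontally flipped copy
```

Both runs use the same seed, so the only difference is whether the images are flipped, yet the loss curves diverge.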

Would it be possible to share some example images and a script we could look at and use to reproduce the issue?