Increase dataset size using Data Augmentation

It probably provides regularisation the same way any other augmentation would?

You seem to be suggesting something along the lines of:

for epoch in range(num_epochs):
    # alternate the augmentation pipeline each epoch
    if epoch % 2 == 0:
        transform = first_transforms
    else:
        transform = second_transforms
    # make dataloader with transform
    # train

which is functionally identical to running half as many epochs and training sequentially over the dataset once with each set of transforms. It wouldn’t be as thoroughly shuffled, but since you’re drawing from the same source images either way, it probably wouldn’t matter.
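
If you do want the shuffling, you can get the same data exposure in one pass by wrapping the dataset twice, once per transform set, and concatenating, so a single shuffled loader interleaves both augmented views. A minimal PyTorch sketch, assuming torchvision-style transforms; the specific augmentations and CIFAR10 are just placeholders for whatever you're actually using:

from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Placeholder transform sets; swap in your own augmentations.
first_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
second_transforms = transforms.Compose([
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

# Same source images wrapped twice, once per transform set, then
# concatenated so one shuffled loader mixes both augmented views.
combined = ConcatDataset([
    datasets.CIFAR10("./data", train=True, download=True, transform=first_transforms),
    datasets.CIFAR10("./data", train=True, download=True, transform=second_transforms),
])
loader = DataLoader(combined, batch_size=64, shuffle=True)

Each pass over combined then covers every image once per transform set, with the batches mixed, which matches the alternating-epoch loop minus the shuffling caveat.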