I’m trying to apply torchvision.transforms.RandAugment to some images, but it seems to behave inconsistently (I know the transform is random by design, so this is a different kind of inconsistency).

Specifically, I’ve found that it sometimes works and sometimes raises an error with inputs of torch.float32.

My current transforms pipeline looks like this:

train_transforms = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),  # the position of this transform seems to affect whether RandAugment works or not
    transforms.RandAugment()
])

Using the above seems to generate inconsistent results.

Hi @mrdbourke, remember that ToTensor() normalizes the image to the range [0, 1] and returns a float32 tensor, but RandAugment can be applied either to a Tensor (which is what ToTensor() returns) or to a PIL Image (which is what you have after Resize((64, 64))). Since RandAugment samples its operations randomly and some of them expect uint8 pixel values, a float32 tensor in [0, 1] only fails when one of those ops happens to be drawn, which is why the results look inconsistent.