Using torchvision.transforms with floating point

Do I understand correctly that, since torchvision.transforms operate on PIL images, which (as far as I have tried) cannot be converted to float while retaining all 3 channels (which is pretty weird, so I hope somebody can correct me on this), one cannot apply torchvision.transforms in floating point?
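To illustrate what I mean (a minimal sketch with Pillow and NumPy): converting an RGB image to float collapses it into PIL's single-channel `F` mode, and building a PIL image directly from a 3-channel float array raises an error.

```python
import numpy as np
from PIL import Image

# A 3-channel uint8 array converts to a PIL image just fine...
rgb = Image.fromarray(np.zeros((8, 8, 3), dtype=np.uint8))
print(rgb.mode)  # RGB

# ...but converting to float drops to PIL's single-channel "F" mode,
f = rgb.convert("F")
print(f.mode, np.asarray(f).shape)  # F (8, 8)

# and a 3-channel float32 array cannot be wrapped as a PIL image at all:
try:
    Image.fromarray(np.zeros((8, 8, 3), dtype=np.float32))
except TypeError as e:
    print("TypeError:", e)
```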

The reason I'm asking is that even though I can imagine 8-bit integer transforms being faster, if I'm looking to preserve detail, then repeatedly applying transforms such as rotation and shear in 8-bit can accumulate quantization error over time, since every interpolated pixel gets rounded back to an integer at each step.
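A rough way to see the effect (a hypothetical demo, not a rigorous benchmark): rotate a smooth gradient back and forth many times in uint8 (`L` mode) versus float32 (`F` mode). Both suffer interpolation blur, but the uint8 path additionally rounds to integers on every resample.

```python
import numpy as np
from PIL import Image

# Smooth horizontal gradient as the test pattern.
grad = np.tile(np.linspace(0, 255, 64, dtype=np.float32), (64, 1))

def churn(img):
    # Rotate +7 then -7 degrees, 20 times, with bilinear resampling.
    for _ in range(20):
        img = img.rotate(7, resample=Image.BILINEAR)
        img = img.rotate(-7, resample=Image.BILINEAR)
    return np.asarray(img, dtype=np.float32)

err_u8 = np.abs(churn(Image.fromarray(grad.astype(np.uint8))) - grad).mean()
err_f32 = np.abs(churn(Image.fromarray(grad, mode="F")) - grad).mean()
print(f"mean abs error  uint8: {err_u8:.3f}   float32: {err_f32:.3f}")
```

Both numbers include the same interpolation/border losses; any extra error in the uint8 run is the per-step rounding I'm worried about.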

Do I understand correctly that if I want to do augmentations in float, I must use OpenCV (or, if I remember correctly, fastai)?