Hi,
I’ve run into an issue when using the torchvision.transforms.autoaugment.AutoAugment()
module after setting the default dtype to float16.
The minimal code to reproduce the error is:
```python
import torch
import torchvision.transforms as transforms

torch.set_default_dtype(torch.float16)
image = torch.randn(10, 3, 32, 32).cuda()
transform = transforms.autoaugment.AutoAugment()
transform = transform.cuda()
result = transform(image)
```
which gives the error:

```
"round_vml_cpu" not implemented for 'Half'
```
My understanding is that it’s a bug: AutoAugment() does not move the tensors created in its _augmentation_space
method to the input’s device, so they are built on the CPU with the default dtype, and round() has no CPU kernel for float16 (it does exist on CUDA).
I’ve worked around it for now by replacing this function with one that creates the tensors on the GPU.
I’m posting this here to check whether it is a bug / not yet implemented, or whether I’ve misunderstood how to use the module.
Thanks in advance.