Why does kornia.augmentation.RandomGaussianBlur() convert the dtype?

Somehow, kornia.augmentation.RandomGaussianBlur, when run inside with torch.cuda.amp.autocast():, converts the data type from float to half. Is this expected? How can I disable the data type conversion? Thank you in advance.

The transformation eventually calls into the F.conv2d operation and thus participates in mixed precision. If you want to disallow it, wrap these operations in a nested autocast context with enabled=False.
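A minimal sketch of the nested-autocast approach (my own illustration, not code from the thread): a plain Conv2d stands in for the kornia blur, and device_type="cpu" with bfloat16 is used so the sketch runs without a GPU; under torch.cuda.amp.autocast() you would nest torch.autocast(device_type="cuda", enabled=False) instead.

```python
import torch

# Stand-in for the kornia augmentation, which also bottoms out in conv2d.
conv = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.randn(1, 3, 8, 8)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y_mixed = conv(x)  # autocast runs this conv in bfloat16
    # Nested context with enabled=False opts this region out of autocast,
    # so the conv runs in the tensors' native float32.
    with torch.autocast(device_type="cpu", enabled=False):
        y_full = conv(x)

print(y_mixed.dtype, y_full.dtype)
```

The inner context only disables autocast for the ops inside it; everything after it resumes mixed precision.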

Error Message:

Input type (torch.cuda.HalfTensor) and weight type (torch.FloatTensor) should be the same

Code (training):

model = GERL().cuda()
images = images.cuda()
with torch.cuda.amp.autocast():
    output = model(images)

Code (model):

class GERL(nn.Module):
    def __init__(self):
        super().__init__()
        self.aug = ImageSequential(
            ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1, p=0.8),
            RandomGaussianBlur(kernel_size=(23, 23), sigma=(0.1, 2.0), p=1.0),
        )
        self.encoder = resnet50()

    def forward(self, x):
        with torch.no_grad():
            x = self.aug(x)
        x = self.encoder(x)
        return x

An error occurs at x = self.encoder(x) during the forward pass because the model parameters (torch.FloatTensor) and the input (torch.cuda.HalfTensor) have different data types. Thank you!!

You might need to cast the inputs back to float32 if they are already in float16, as described in the docs.
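One way to apply this advice (a hypothetical helper of my own, not from the thread): cast reduced-precision inputs back to float32 before passing them to a float32-only module. A Linear layer stands in for the encoder here so the sketch runs anywhere.

```python
import torch

def to_float32(x: torch.Tensor) -> torch.Tensor:
    # Cast half/bfloat16 tensors produced under autocast back to float32;
    # leave other dtypes untouched.
    if x.dtype in (torch.float16, torch.bfloat16):
        return x.float()
    return x

encoder = torch.nn.Linear(4, 2)                    # stand-in float32 module
half_input = torch.randn(3, 4, dtype=torch.float16)
out = encoder(to_float32(half_input))              # encoder(half_input) would raise a dtype mismatch
```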

Also, your current code snippet does not show the nested autocast context (with enabled=False) that I suggested earlier, so I'm not sure why you aren't using that approach.
