Autocast doesn't cast all FloatTensor to HalfTensor

Hi, I am trying to use Automatic Mixed Precision (AMP) in PyTorch and I'm running into the problem in the title: autocast doesn't cast every FloatTensor to a HalfTensor. I wrote a small example below; can someone tell me what's wrong with my code?

>>> import torch
>>> from torch.nn import functional as F
>>> from torch.cuda.amp import autocast
>>> torch.manual_seed(30)
>>> torch.cuda.manual_seed(30)
>>> x = torch.randn(3,4).cuda()
>>> y_true = torch.tensor([3.,3.,3.]).cuda()
>>> criterion = torch.nn.MSELoss()
>>> model = torch.nn.Linear(4,1, bias = False)
>>> model.cuda()
>>> with autocast(enabled=True):
...     out = model(x)
...     loss = criterion(out, y_true)
>>> print(out.type())
>>> print(loss.type())
>>> print(isinstance(out, torch.cuda.HalfTensor))
>>> print(isinstance(loss, torch.cuda.HalfTensor))

I expected the loss to be in float16 — or am I misunderstanding how AMP works?

torch.cuda.amp.autocast casts ops to float16 where it's safe to do so, and casts to (or keeps) float32 where full precision is necessary, as described here.
The float32 list contains mse_loss, so the output you are seeing is expected.
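You can see the policy directly: even if you feed mse_loss explicitly half-precision inputs, autocast casts them up and the result comes back as float32 (a minimal sketch, assuming a CUDA device is available):

```python
import torch
from torch.cuda.amp import autocast

# Sketch: mse_loss is on autocast's float32 list, so its inputs are
# cast up and its output is float32 even when the inputs are float16.
if torch.cuda.is_available():
    a = torch.randn(3, device="cuda", dtype=torch.float16)
    b = torch.randn(3, device="cuda", dtype=torch.float16)
    with autocast(enabled=True):
        out = torch.nn.functional.mse_loss(a, b)
    print(out.dtype)  # torch.float32
```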

Thank you for your answer,
My problem is the RuntimeError: expected scalar type Half but found Float at the line scaler.scale(loss).backward(). I want to print the internal params to check which are HalfTensor by:

>>> def check_half(model):
...     for name, param in model.named_parameters():
...         if isinstance(param, torch.cuda.HalfTensor):
...             print(name)
>>> with autocast(enabled=True):
...     out = model(x)
...     check_half(model)

However, it doesn’t work; do you have any suggestions?
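For context, the training step I'm running follows the usual AMP recipe from the docs (a sketch using the toy model, data, and a hypothetical optimizer, with a CPU fallback so it runs anywhere):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # AMP is only active on CUDA here

model = torch.nn.Linear(4, 1, bias=False).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()
scaler = GradScaler(enabled=use_amp)  # no-op pass-through when disabled

x = torch.randn(3, 4, device=device)
y_true = torch.tensor([3., 3., 3.], device=device)

optimizer.zero_grad()
with autocast(enabled=use_amp):
    out = model(x)                        # matmul runs in float16 on CUDA
    loss = criterion(out.squeeze(1), y_true)  # mse_loss stays float32
# backward and step happen outside the autocast region;
# the scaler scales the loss and unscales the grads before stepping
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```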

Could you post a minimal, executable code snippet reproducing the issue so that I could take a look, please?
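In the meantime, one thing worth noting: autocast does not change the dtype of the parameters themselves — it casts the inputs/outputs of eligible ops on the fly — so iterating over the parameters will report float32 and your check_half will never print anything. A quick way to see this (a sketch; runs on CPU as well):

```python
import torch
from torch.cuda.amp import autocast

model = torch.nn.Linear(4, 1, bias=False)
x = torch.randn(3, 4)

# Only enable autocast when CUDA is actually available.
with autocast(enabled=torch.cuda.is_available()):
    out = model(x)

# Parameters keep their original dtype under autocast; only the op
# inputs/outputs are cast during the forward pass.
for name, param in model.named_parameters():
    print(name, param.dtype)  # weight torch.float32
```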