Hi, I am trying to use PyTorch's Automatic Mixed Precision (AMP) and I'm running into the problem described in the title. I wrote a small example below; can someone tell me what's wrong with my code?
>>> import torch
>>> from torch.nn import functional as F
>>> from torch.cuda.amp import autocast
>>> torch.manual_seed(30)
>>> torch.cuda.manual_seed(30)
>>> x = torch.randn(3,4).cuda()
>>> y_true = torch.tensor([[3.], [3.], [3.]]).cuda()  # shape (3, 1) to match the model output
>>> criterion = torch.nn.MSELoss()
>>> model = torch.nn.Linear(4, 1, bias=False).cuda()
>>> with autocast(enabled=True):
...     out = model(x)
...     loss = criterion(out, y_true)
...     print(out.type())
...     print(loss.type())
...     print(isinstance(out, torch.cuda.HalfTensor))
...     print(isinstance(loss, torch.cuda.HalfTensor))
...
torch.cuda.HalfTensor
torch.cuda.FloatTensor
True
False
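For what it's worth, the same thing seems to happen when I bypass the modules and call the underlying ops directly (reusing x, y_true, and the model weight from above; out2 and loss2 are just throwaway names for this snippet):
>>> w = model.weight.detach()
>>> with autocast(enabled=True):
...     out2 = torch.matmul(x, w.t())     # matmul produces float16 under autocast
...     loss2 = F.mse_loss(out2, y_true)  # mse_loss comes back as float32
...
>>> out2.dtype
torch.float16
>>> loss2.dtype
torch.float32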
I expected the loss to be in float16 as well. Or am I misunderstanding what AMP is supposed to do?