Torch not checking for inplace operations when autocast is enabled

I have noticed that torch doesn't raise an error when a variable needed for gradient computation is modified by an inplace operation, as long as autocast is turned on. Why is that? And would my example with autocast work as expected?

Here is a simple example without autocast. It raises the following error:

import torch

model = torch.nn.Linear(1, 1).cuda()

# Indexed assignment into x is an inplace operation on a tensor
# that autograd still needs for the backward pass.
x = torch.empty(3, 1).cuda()
x[0] = model(torch.zeros(1).cuda())
x[1] = model(x[0])
x[2] = model(x[1])

loss = x.mean()
loss.backward()

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace
operation: [torch.cuda.FloatTensor [1, 1]], which is output 0 of UnsqueezeBackward0, is at version 4;
expected version 3 instead. Hint: enable anomaly detection to find the operation that failed to compute its
gradient, with torch.autograd.set_detect_anomaly(True).
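For completeness, one common way to avoid the error without autocast (a minimal sketch, not part of the original question) is to collect the intermediate outputs in a Python list and stack them at the end, so no tensor that autograd saved is written to inplace:

import torch

model = torch.nn.Linear(1, 1).cuda()

# Build the sequence in a plain list; torch.stack then creates a fresh
# tensor, so nothing needed for the backward pass is modified inplace.
outs = [model(torch.zeros(1).cuda())]
outs.append(model(outs[0]))
outs.append(model(outs[1]))
x = torch.stack(outs)

loss = x.mean()
loss.backward()  # completes without the RuntimeError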

Here is the example with autocast turned on:

model = torch.nn.Linear(1, 1).cuda()
with torch.cuda.amp.autocast():
    x = torch.empty(3, 1).cuda()
    x[0] = model(torch.zeros(1).cuda())
    x[1] = model(x[0])
    x[2] = model(x[1])

# backward is called outside the autocast region, as recommended
loss = x.mean()
loss.backward()  # no error is raised here
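As a quick check (an illustrative addition, reusing the variables from the snippet above), the backward pass completes and the gradients are populated:

print(model.weight.grad)  # populated tensor, no RuntimeError
print(x.dtype)            # torch.float32 -- the fp16 outputs were cast back on assignment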

autocast applies the casts to FP16 internally, so eligible operations such as the linear layer work on a new tensor (effectively a clone of the data in another dtype). The tensor autograd saves for the backward pass is this FP16 copy rather than the original slice of x, so the later inplace writes into x no longer invalidate a saved tensor, which saves you from the error.
Generally, if autocast isn't raising an error, your operations should work fine.
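A quick way to see this (a minimal sketch, assuming the same model as above): under autocast the linear layer returns a brand-new float16 tensor instead of reusing the float32 input, which is why the inplace version check is no longer tripped.

import torch

model = torch.nn.Linear(1, 1).cuda()
inp = torch.zeros(1).cuda()

with torch.cuda.amp.autocast():
    out = model(inp)

print(inp.dtype)  # torch.float32 -- the input is untouched
print(out.dtype)  # torch.float16 -- autocast produced a new, casted tensor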