Clone and detach in v0.4.0

I think the confusion is about what “correctness checks” means here.
If the user changes the values of a Tensor in place and then uses it, we don’t consider that an error. If you change some values while explicitly hiding the change from autograd with .detach(), we assume you have a good reason to do so.
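
To make this concrete, here is a minimal sketch (my addition, not from the original example), assuming the standard torch API: an in-place change is fine, even when hidden from autograd with .detach(), as long as the modified Tensor is not one of the values saved for the backward pass.

import torch

a = torch.rand(10, requires_grad=True)
b = a * 2                    # the backward of mul-by-scalar needs neither `a` nor `b`
b.detach().clamp_(min=0.5)   # change some values while hiding the op from autograd
b.sum().backward()           # no error: the modified `b` is never needed for gradients
# a.grad is filled with 2s; the hidden clamp_ is simply not part of the graph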

What happens, though, is that the forward pass needs to save some Tensor values to be able to compute the backward pass. If you modify one of these saved Tensors before running the backward, you will get an error, because the original value that was needed to compute the right gradients does not exist anymore (it was modified in place).
For example, here the output of exp() is required in the backward, so if we modify it in place, we get an error:

>>> a = torch.rand(10, requires_grad=True)
>>> b = a.exp()
>>> b.mul_(2)
tensor([5.0635, 3.1801, 2.5123, 2.1725, 2.6194, 2.4245, 4.1136, 5.2920, 3.9636,
        3.0117], grad_fn=<MulBackward0>)
>>> b.sum().backward()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/albandes/workspace/pytorch_dev/torch/tensor.py", line 183, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/Users/albandes/workspace/pytorch_dev/torch/autograd/__init__.py", line 125, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10]], which is output 0 of ExpBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
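
For completeness, one way around this (a sketch I am adding, not part of the original example) is to clone the output before modifying it in place, so the value that exp() saved for the backward stays untouched:

import torch

a = torch.rand(10, requires_grad=True)
b = a.exp()
c = b.clone()       # `c` owns its own memory; `b`, which ExpBackward saved, is untouched
c.mul_(2)           # the in-place change now only touches the clone
c.sum().backward()  # works: a.grad equals 2 * a.exp()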