That’s not entirely true: the detached tensor shares its storage with the original one. If you want to modify them independently of each other, you have to clone. That said, approaches 2 and 3 are indeed equivalent, but in approach 1, a would still share the same storage with b, which is not the case in the other approaches due to the clone() operation.
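A minimal sketch of the storage-sharing behavior (tensor names are just illustrative):

```python
import torch

b = torch.ones(3)
a = b.detach()           # a shares storage with b, no copy is made
a[0] = 42.0
print(b[0].item())       # 42.0 -- b was modified through a

c = b.detach().clone()   # clone() makes an independent copy
c[1] = -1.0
print(b[1].item())       # 1.0 -- b is unaffected by writes to c
```

You can also check a.data_ptr() == b.data_ptr() to confirm that detach() alone does not copy the data.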
If what you want is exactly the old a = Variable(b.data, requires_grad=True), then the equivalent is a = b.detach().requires_grad_(). In both cases, a and b share the same storage and won’t use extra memory. The new version is better, though, because the autograd engine will properly detect if you do inplace operations on a while b's original value is needed for something else. So the gradients will be computed properly, and if that can’t be done, it will raise an error.
If such an error occurs, you will need to add a clone to make sure that you won’t change b by side effect. I think the best way to do it is a = b.detach().clone().requires_grad_().