.detach().clone() vs .data.clone()

I compared x.detach().clone() and x.data.clone() on a random tensor x with requires_grad=True, and detach().clone() seems a bit faster in terms of computation time. Why is that? :thinking:

I don’t see a difference using:

x = torch.randn(1024, 1024, requires_grad=True)
%timeit x.data.clone()
%timeit x.detach().clone()

but note that the .data attribute is deprecated and .detach().clone() is the recommended approach, since .data bypasses autograd's safety checks.
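To illustrate why .detach() is safer (this is a minimal sketch, not part of the timing question above): both .data and .detach() return a tensor sharing the same storage, but only the .detach() view participates in autograd's version-counter check, so an in-place modification of a tensor that backward still needs is caught instead of silently corrupting the gradient.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x.sigmoid()              # sigmoid saves its output y for the backward pass

# detach() shares the original tensor's version counter, so an in-place
# edit of a tensor needed for backward is detected and raises an error.
y.detach().zero_()
caught = False
try:
    y.sum().backward()
except RuntimeError:
    caught = True             # "... modified by an inplace operation"

# .data bypasses the version counter: the same edit goes unnoticed and
# backward silently produces a wrong gradient (here all zeros, since
# d(sigmoid)/dx = y * (1 - y) is evaluated with the clobbered y = 0).
x2 = torch.randn(3, requires_grad=True)
y2 = x2.sigmoid()
y2.data.zero_()
y2.sum().backward()           # no error raised, but x2.grad is wrong

print(caught)                 # True
print(x2.grad)                # tensor([0., 0., 0.])
```

The timing difference between the two clones is negligible either way; the correctness behavior above is the real reason to prefer .detach().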
