`tolist` vs `numpy` with `requires_grad=True`

What is the rationale for allowing the t.tolist() method to be called on a tensor t with requires_grad=True, but not t.numpy()? I'd have thought it would make sense to allow both or neither; it's not clear to me why one is allowed and not the other. Example:

import torch

t = torch.arange(5, dtype=torch.float32, requires_grad=True)
print(t, type(t), type(t[0]))
# >>> tensor([0., 1., 2., 3., 4.], requires_grad=True) <class 'torch.Tensor'> <class 'torch.Tensor'>
print(t.tolist(), type(t.tolist()), type(t.tolist()[0]))
# >>> [0.0, 1.0, 2.0, 3.0, 4.0] <class 'list'> <class 'float'>
print(t.numpy())
# >>> RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

Hi Jake!

I speculate that it’s because t.numpy() shares memory with t, but
autograd can’t track changes made to t through t.numpy(). My guess
is that this is sufficiently error-prone that it’s simply not
supported.
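Consistent with that speculation, t.tolist() copies the values out into plain Python floats, so the returned list can’t be used to mutate t behind autograd’s back. A quick sketch to check this:

```python
import torch

t = torch.arange(5, dtype=torch.float32, requires_grad=True)

# tolist() copies the data out of the tensor into plain Python floats
lst = t.tolist()
lst[1] = 99.0  # modifying the list...

print(t)  # ...leaves the tensor unchanged: tensor([0., 1., 2., 3., 4.], requires_grad=True)
```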

Here we see how one can modify t through its shared memory:

>>> import torch
>>> torch.__version__
'2.4.0'
>>> t = torch.arange(5.)
>>> tn = t.numpy()
>>> tn[1] = 99
>>> t
tensor([ 0., 99.,  2.,  3.,  4.])
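As the error message suggests, the supported route when requires_grad=True is to detach first. Note that detach().numpy() still shares memory with the original tensor, so in-place writes through the array silently bypass autograd, which is exactly the kind of footgun the error seems intended to flag. A minimal sketch:

```python
import torch

t = torch.arange(5, dtype=torch.float32, requires_grad=True)

# detach() returns a tensor cut off from the autograd graph,
# so converting it to numpy is allowed
tn = t.detach().numpy()

# Caution: the array still shares memory with t, so this write
# changes t's data without autograd seeing it
tn[1] = 99.0
print(t)  # tensor([ 0., 99.,  2.,  3.,  4.], requires_grad=True)
```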

Best.

K. Frank

Makes sense, thank you @KFrank !