Confused by an example in the official PyTorch documentation

Hi there, please check the example from the torch.asarray — PyTorch 2.0 documentation page:

>>> a = torch.tensor([1, 2, 3], requires_grad=True).float()
>>> b = a + 2
>>> b
tensor([1., 2., 3.], grad_fn=<AddBackward0>)

I tried this script and it shows:

>>> a = torch.tensor([1, 2, 3], requires_grad=True).float()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
>>> a = torch.tensor([1.0, 2, 3], requires_grad=True).float()
>>> a
tensor([1., 2., 3.], requires_grad=True)
>>> b = a + 2
>>> b
tensor([3., 4., 5.], grad_fn=<AddBackward0>)
>>> 

Am I missing something? Looking forward to your advice. Thanks.
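For what it's worth, here is a minimal sketch of one way the documentation example could be made runnable: since requires_grad=True is only permitted on floating-point (or complex) tensors, the tensor has to be created with a float dtype up front rather than cast with .float() afterwards. (This is just an illustration of the fix, not the wording of any official patch.)

```python
import torch

# requires_grad=True is only allowed on floating-point or complex tensors,
# so set a float dtype at construction time instead of calling .float() after.
a = torch.tensor([1, 2, 3], dtype=torch.float32, requires_grad=True)
b = a + 2

print(a)  # tensor([1., 2., 3.], requires_grad=True)
print(b)  # tensor([3., 4., 5.], grad_fn=<AddBackward0>)
```

Note that b then holds [3., 4., 5.], not [1., 2., 3.] as the docs currently show.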

I also don’t see how a + 2 could return the same values; besides, the code already fails earlier, in the creation of a:

    a = torch.tensor([1, 2, 3], requires_grad=True).float()

RuntimeError: Only Tensors of floating point and complex dtype can require gradients

CC @ysiraichi as the author of this PR.
Could you check if the example is missing a few things?


Ah, those examples are definitely wrong!
I have created a PR fixing that. Thank you for spotting that.
