Appending PyTorch tensors to a list

I am trying to record the history of x in a list with a `for` loop.
With Python floats, I get the expected list.
With tensors, the result is surprising: all entries of the list are the same!
Could someone explain why this happens with tensors? Thanks.

— code —

import torch

# when x is a python float

x_hist = []
x = 1.1

for i in range(3):
    x -= 0.1
    x_hist.append(x)

# out [1.0, 0.9, 0.8]

# when x is a tensor

x_hist = []
x = torch.tensor(1.1)

for i in range(3):
    x -= 0.1
    x_hist.append(x)

# out [tensor(0.8000), tensor(0.8000), tensor(0.8000)]

In the tensor case, `x_hist.append(x)` stores a reference to the tensor object, not a copy of its value.
The in-place subtraction `x -= 0.1` then modifies that same object, so when you print the list elements later, they all show the latest value of x.
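A minimal sketch of this (assuming a plain, non-gradient tensor as in the question): every list entry is literally the same object as x.

```python
import torch

x_hist = []
x = torch.tensor(1.1)

for i in range(3):
    x -= 0.1          # in-place: mutates the existing tensor object
    x_hist.append(x)  # stores a reference to that same object

# all three entries are the very same object as x
print(all(t is x for t in x_hist))  # True
```

With a Python float, `x -= 0.1` instead rebinds the name x to a brand-new float object, so each appended value stays independent.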

OK, I see. Thanks.
Then a follow-up question:
what should I do to get the list I want, i.e., [1.0, 0.9, 0.8]?

It depends on your use case.
If you do not need gradients, you can use .item() to get a plain Python float; alternatively, .clone() appends an independent copy of the tensor.
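A minimal sketch of both suggestions (the list names here are just illustrative):

```python
import torch

x_hist_items = []   # plain Python floats via .item()
x_hist_clones = []  # independent tensor copies via .clone()
x = torch.tensor(1.1)

for i in range(3):
    x -= 0.1
    x_hist_items.append(x.item())    # converts to a Python float
    x_hist_clones.append(x.clone())  # copies the current tensor value

print(x_hist_items)
print(x_hist_clones)
```

Later in-place updates to x no longer affect either list, because each entry is detached from the object being mutated.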


Thanks again!
I tried both .item() and .clone().

When I use x.item(), I get
[1.0, 0.8999999761581421, 0.7999999523162842]
and when I use .clone(), I get
[tensor(1.), tensor(0.9000), tensor(0.8000)]

Interesting, isn't it? Why do the two outputs look different?

When a tensor is printed, its values are rounded to four decimal digits by default; .item() returns the exact Python float for the stored (float32) value, which is why you see the extra digits.
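You can see this by raising the print precision (a sketch; `torch.set_printoptions` only changes how tensors are displayed, not the stored values):

```python
import torch

x = torch.tensor(1.1) - 0.1 - 0.1  # float32 arithmetic under the hood
print(x)          # rounded to four digits by default, e.g. tensor(0.9000)
torch.set_printoptions(precision=10)
print(x)          # now the stored float32 value is visible
print(x.item())   # the exact Python float for that stored value
```

So the two lists hold the same underlying values; only the display differs.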

Great answers! Thanks.