import torch

def f():
    x = []
    for i in range(3):
        # Each element is a float leaf tensor with gradient tracking enabled.
        x.append(torch.tensor(i * 1.).clone().requires_grad_(True))
    print(x)
Expected results:
[tensor(0., requires_grad=True), tensor(1., requires_grad=True), tensor(2., requires_grad=True)]
This function works as expected in the terminal. However, in my project code (in the same environment), x[0].requires_grad
is always False
(even if I try to set it to True
afterward), while the other elements are fine. It is quite mysterious. Does anyone know the potential causes? Thanks.
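For anyone hitting a similar issue, a minimal diagnostic sketch (the `diagnose` helper below is hypothetical, not from the original code) is to inspect the properties that usually explain a requires_grad flag that will not stick: whether the tensor is a leaf, whether it has a grad_fn, its dtype, and whether grad mode is globally enabled:

```python
import torch

def diagnose(t):
    # Properties that commonly explain a requires_grad flag refusing to stick.
    print("requires_grad:", t.requires_grad)
    print("is_leaf:", t.is_leaf)    # requires_grad can only be assigned on leaf tensors
    print("grad_fn:", t.grad_fn)    # non-None means t is the result of a tracked op (non-leaf)
    print("dtype:", t.dtype)        # only floating-point/complex tensors can require grad
    print("grad enabled:", torch.is_grad_enabled())  # False inside torch.no_grad()

t = torch.tensor(0.).clone().requires_grad_(True)
diagnose(t)
```

If `is_leaf` is False or the dtype is integral, assigning `requires_grad = True` raises or has no effect, which would match the behavior described above.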