Hello

I am creating a zero tensor before a loop (let's call it "test_tensor"), and at each time step I want to reset its elements to zero. Currently I am doing it as in the code snippet below:

```
auto test_tensor = torch::zeros_like(tensor_initializer);
auto test_tensor_2 = torch::zeros_like(tensor_initializer);

for (int time_step = 0; time_step < time_steps; time_step++) {
    test_tensor_2 = test_tensor;
    // Reset by rebinding to a freshly allocated zero tensor each step.
    test_tensor = torch::zeros_like(tensor_initializer);
    for (int i = 0; i < limit; i++) {
        test_tensor[i] = function(*args);
    }
}
```

If I instead reset the elements of "test_tensor" to zero with test_tensor.zero_(), I obtain different results and my tests fail.

Questions:

- What could cause these two approaches to behave differently?
- Is test_tensor.zero_() actually beneficial in terms of memory, or can I stick with the current approach without any issues?