Could you tell me why the error occurs here?

```
import torch

a = torch.rand(3, 3, requires_grad=True)

L = []
for i in range(10):
    b = a * a
    d = b * b
    e = d.sum()
    L.append(e)
    sum(L).backward()
```

Your code is unfortunately not formatted correctly, but I assume the `backward()` operation is performed inside the loop. The first `backward` pass frees the intermediate activations from the forward pass, so you wouldn't be able to call `backward` a second time (if this is needed, use `retain_graph=True`, but it's usually not the case). Since you are storing the `e` tensor in the `L` list and calculating the `sum` afterwards, the next `backward` pass would then try to backpropagate through *both* `e` tensors (the one from the current iteration [`e1`] as well as the one from the previous iteration [`e0`]). However, since the computation graph from `e0` is already freed, the error is raised.
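
To see the mechanism in isolation, here is a minimal sketch (the tensor names are just for illustration) reproducing the same error with a single tensor:

```
import torch

x = torch.rand(3, requires_grad=True)
y = (x * x).sum()

y.backward()  # frees the intermediate buffers of y's graph
y.backward()  # raises "Trying to backward through the graph a second time ..."
# passing retain_graph=True to the first backward() call would keep the graph alive
```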

I don’t know why you want to do this:

```
import torch

a = torch.rand(3, 3, requires_grad=True)

L = []
for i in range(10):
    b = a * a
    d = b * b
    e = d.sum()
    L.append(e)
    sum(L).backward()
```

The more usual way is to have `sum(L).backward()` outside of the loop (and I think that is really what you intended).
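
A minimal sketch of that version (same computation, one backward call after the loop):

```
import torch

a = torch.rand(3, 3, requires_grad=True)

L = []
for i in range(10):
    b = a * a
    d = b * b
    e = d.sum()
    L.append(e)

# A single backward pass through all ten stored graphs; each graph is
# backpropagated exactly once, so retain_graph is not needed.
sum(L).backward()
print(a.grad)
```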

But if your intention really is to compute the gradients for what you asked, then you can avoid `retain_graph=True` by changing how you call `backward`:

```
import torch

a = torch.rand(3, 3, requires_grad=True)

L = []
for i in range(10):
    b = a * a
    d = b * b
    e = d.sum()
    # If you really want to compute the gradient multiple times, i.e.
    # backward 10 times through the first item of the list, 9 times
    # through the second, and so on, scale a single backward call instead.
    # Note: the gradient must be a float tensor, hence 10.0 - i.
    e.backward(torch.tensor(10.0 - i))
    L.append(e.item())

total = sum(L)  # L now holds plain floats, so this is just the loss value
# do something with a.grad and then reset it
print(a.grad)
a.grad.fill_(0)
```

This way is more memory efficient, since no computation graphs have to be kept alive across iterations.
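
If you want to convince yourself that the `10 - i` scaling is equivalent, here is a small sketch (my own check, not part of the original answer) comparing it against the `retain_graph=True` variant:

```
import torch

torch.manual_seed(0)
a = torch.rand(3, 3, requires_grad=True)

# Variant 1: backward on sum(L) inside the loop, keeping all graphs alive.
L = []
for i in range(10):
    e = ((a * a) * (a * a)).sum()
    L.append(e)
    sum(L).backward(retain_graph=True)
grad_retain = a.grad.clone()
a.grad.fill_(0)

# Variant 2: one scaled backward per iteration, no graphs kept alive.
for i in range(10):
    e = ((a * a) * (a * a)).sum()
    e.backward(torch.tensor(10.0 - i))

print(torch.allclose(grad_retain, a.grad))  # True
```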