Loss error when using a list for an ensemble of networks

I just started using PyTorch and I plan to build an ensemble of networks. I implemented it with a list, but I get the following error.
I'm confused by it: the network has just 2 fully connected layers. Could you help me understand what's going on? Thanks in advance.

Traceback (most recent call last):
  File "test_list.py", line 107, in <module>
    list_of_loss[i].backward()
  File "/home/weiguo/anaconda3/envs/pomdp/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/weiguo/anaconda3/envs/pomdp/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.

Here’s my code:

num_model = 3

list_of_model = []
list_of_cri = []
list_of_opt = []

for _ in range(num_model):
    tem_model = TestNet(input_size, hidden_size, num_classes).to(device)
    tem_cri = nn.CrossEntropyLoss()
    list_of_model.append(tem_model)
    list_of_cri.append(tem_cri)

for i in range(num_model):
    list_of_opt.append(torch.optim.Adam(list_of_model[i].parameters(), lr=learning_rate))

list_of_loss = []

total_step = len(train_loader)
for epoch in range(num_epochs):
    for j, (images, labels) in enumerate(train_loader):
        for i in range(num_model):
            # print(i)
            images = images.reshape(-1, 28*28).to(device)
            labels = labels.to(device)

            output = list_of_model[i](images)
            loss = list_of_cri[i](output, labels)
            list_of_loss.append(loss)

            list_of_opt[i].zero_grad()
            list_of_loss[i].backward()
            # loss.backward()
            list_of_opt[i].step()

            if (j+1) % 100 == 0:
                _, predicted = torch.max(output.data, 1)
                correct = (predicted == labels).sum().item()
                if i == 0:
                    print("########")
                print(correct/labels.size(0))

However, if I comment out list_of_loss.append(loss) and list_of_loss[i].backward() and instead call loss.backward() directly in each loop, there is no error.
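My guess is that the problem is the growing list: list_of_loss is appended to on every batch but always indexed with the inner-loop index i, so from the second batch onward list_of_loss[i] still points at a loss from the first batch, whose graph was already freed by its first backward() call. Here is a standalone sketch (with made-up tiny linear models, not my real TestNet) that seems to reproduce the error:

```python
import torch

# Two tiny linear "models" and a loss list that keeps growing across
# batches, mimicking the training loop in the question.
models = [torch.nn.Linear(4, 2) for _ in range(2)]
losses = []
results = []  # records whether each backward() call succeeded

for batch in range(2):                  # two fake batches
    x = torch.randn(3, 4)
    for i in range(2):                  # two models
        loss = models[i](x).sum()
        losses.append(loss)
        # Batch 0: losses[i] is the loss just appended, so backward() works.
        # Batch 1: the list already holds 2 entries, so losses[i] still
        # refers to batch 0's loss, whose graph was freed -> RuntimeError.
        try:
            losses[i].backward()
            results.append("ok")
        except RuntimeError:
            results.append("error")

print(results)  # -> ['ok', 'ok', 'error', 'error']
```

That would explain why calling loss.backward() directly works: the local variable loss always refers to the loss computed in the current iteration.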