Train the same model twice in one loop

I want to train the same model twice in one loop. Why does it still say the computation graph has been freed? What can I do to make the logic below work? Thanks, everyone.

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1, betas=(0.5, 0.999))

for i in range(200):
    optimizer.zero_grad()
    input = torch.tensor([1])
    output = model(input)
    loss_1 = 100 - output
    loss_1.backward()
    optimizer.step()

    optimizer.zero_grad()
    input = torch.tensor([2])
    output = model(input)
    loss_2 = 200 - output
    loss_2.backward()
    optimizer.step()

Your code doesn’t raise the mentioned error in my setup, but it fails with a dtype mismatch, since you are passing LongTensors as the input while nn.Linear’s parameters are float32.
Use input = torch.tensor([1.]) and it should work.
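A minimal sketch of the difference (assuming default float32 parameters for nn.Linear): the integer literal creates a LongTensor, which fails in the forward pass, while the float literal works.

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)

# torch.tensor([1]) infers dtype int64 (a LongTensor), which does not
# match the float32 weights of nn.Linear and raises a RuntimeError:
try:
    model(torch.tensor([1]))
except RuntimeError as e:
    print("LongTensor input fails:", e)

# torch.tensor([1.]) infers dtype float32, so the forward pass succeeds:
output = model(torch.tensor([1.]))
print(output.shape)  # torch.Size([1])
```

Alternatively, you can cast an existing tensor with `.float()` before feeding it to the model.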