Loss cannot be backpropagated

Hi guys, I’m implementing Sentence-BERT, and when I try to train it, the backward pass on the loss fails.

The error message:

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.

Here is my training pipeline:

output1, output2 = s_bert(tokenized_input1, tokenized_input2)
labels = torch.tensor([1, 0, 1])

criterion = ContrastiveLoss()
optimizer = Adam(s_bert.parameters(), lr=1e-3)
s_bert.train()

for epoch in range(10):
    loss = criterion(output1, output2, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    print(loss)

I guess the reason for the error is that the loss is computed from output1 and output2, which come from the same model. I think the loss should update the parameters only once per step, but I don’t know how to do that.

Looking forward to your help. Best!

You are running the forward pass once and then using the outputs of that single forward pass to compute the loss again and again. Is this what you wanted to do?

Actually, I need the model to receive two inputs at once and compute the distance between each pair. That distance is a component of my loss function. But during the backward pass, I want the model’s parameters to be updated by the loss only once per step.
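
By “distance as a component of the loss” I mean something like the standard margin-based contrastive loss (a simplified sketch, not necessarily my exact ContrastiveLoss; the margin value here is arbitrary):

import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLoss(nn.Module):
    # label 1 = similar pair, label 0 = dissimilar pair
    def __init__(self, margin=1.0):
        super().__init__()
        self.margin = margin

    def forward(self, output1, output2, labels):
        # Euclidean distance between the two sentence embeddings of each pair
        distance = F.pairwise_distance(output1, output2)
        # similar pairs are pulled together, dissimilar pairs pushed apart up to the margin
        loss_similar = labels * distance.pow(2)
        loss_dissimilar = (1 - labels) * F.relu(self.margin - distance).pow(2)
        return (loss_similar + loss_dissimilar).mean()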

This does not answer my question.

OK, I made a big mistake. The model’s forward pass is outside the for-loop, so the loss is always calculated from the same output1 and output2.

When I move output1, output2 = s_bert(tokenized_input1, tokenized_input2) into the loop, it works.
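
For anyone hitting the same error, the working version of the loop looks like this (a sketch reusing s_bert, the tokenized inputs, and ContrastiveLoss from the snippet above):

import torch
from torch.optim import Adam

labels = torch.tensor([1, 0, 1])
criterion = ContrastiveLoss()
optimizer = Adam(s_bert.parameters(), lr=1e-3)
s_bert.train()

for epoch in range(10):
    # forward pass inside the loop, so a fresh autograd graph is built each iteration
    output1, output2 = s_bert(tokenized_input1, tokenized_input2)
    loss = criterion(output1, output2, labels)

    optimizer.zero_grad()
    loss.backward()  # only one backward per graph, so no retain_graph needed
    optimizer.step()

    print(loss.item())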
