Hi, I'm implementing Sentence-BERT, but when I try to train it, the backward pass fails.
The error message:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
Here is my training pipeline:
import torch
from torch.optim import Adam

# s_bert, tokenized_input1/2 and ContrastiveLoss are defined earlier
output1, output2 = s_bert(tokenized_input1, tokenized_input2)  # single forward pass, outside the loop
labels = torch.tensor([1, 0, 1])
criterion = ContrastiveLoss()
optimizer = Adam(s_bert.parameters(), lr=1e-3)

s_bert.train()
for epoch in range(10):
    loss = criterion(output1, output2, labels)
    optimizer.zero_grad()
    loss.backward()  # raises the RuntimeError on the second epoch
    optimizer.step()
    print(loss)
My guess is that the error happens because the loss is computed from output1 and output2, which both come from a single forward pass done before the loop, so the computation graph is freed after the first backward(). I think I need the outputs (and the loss) to be recomputed on every iteration, but I don't know how to set that up.
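Is the fix simply to move the forward pass inside the loop, something like the sketch below? (This reuses the same objects as the pipeline above; I'm not sure it's correct.)

for epoch in range(10):
    optimizer.zero_grad()
    # fresh forward pass -> fresh computation graph each iteration
    output1, output2 = s_bert(tokenized_input1, tokenized_input2)
    loss = criterion(output1, output2, labels)
    loss.backward()
    optimizer.step()
    print(loss.item())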
Looking forward to your help. Thanks!