I am getting the following error while training my model:

```
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
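As far as I understand, this error is raised when a second backward pass runs through a graph whose saved tensors were already freed by the first pass, as in this minimal example:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2
y.backward()  # first backward pass frees the graph's saved tensors
y.backward()  # raises the RuntimeError above unless retain_graph=True was passed
```

But in my training loop I only call `.backward()` once per batch, so I can't see where the second pass is coming from.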
This is the model that I have defined:
```python
import torch
import torch.nn as nn

class SentimentAnalysis(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(SentimentAnalysis, self).__init__()
        self.fc = nn.Linear(embedding_dim, 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out1 = self.fc(x)          # (batch, seq_len, 2)
        out2 = self.sigmoid(out1)  # per-token class probabilities
        out3 = out2.mean(dim=1)    # average over the sequence dimension
        return out3
```
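The model expects input that is already embedded; with made-up sizes, the shapes work out like this:

```python
# Hypothetical sizes, just to show the expected input/output shapes
model = SentimentAnalysis(vocab_size=10_000, embedding_dim=100)
x = torch.randn(8, 20, 100)  # 8 sequences, 20 tokens each, 100-dim embeddings
print(model(x).shape)        # torch.Size([8, 2])
```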
And this is my training step:
```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

sentiment_analysis_model = SentimentAnalysis(len(train_vocabulary), 100)
sentiment_analysis_model = sentiment_analysis_model.to(device)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(sentiment_analysis_model.parameters(), lr=1e-3)

epochs = 5
losses = []
for epoch in range(epochs):
    sentiment_analysis_model.train()
    for batch_idx, batch in enumerate(train_loader):
        input = batch.to(device)
        target = batch.to(device)
        optimizer.zero_grad()
        out = sentiment_analysis_model(input)
        # build one-hot targets from the 0/1 labels
        target1 = []
        for i in target:
            if i == 0:
                target1.append(torch.tensor([1.0, 0.0], requires_grad=True))
            else:
                target1.append(torch.tensor([0.0, 1.0], requires_grad=True))
        target1 = torch.stack(target1)
        loss = loss_fn(out, target1)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
```
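While debugging, I also noticed that my hand-built targets carry `requires_grad=True`, even though targets for `BCELoss` shouldn't need gradients. In case it's relevant, here is how I could build the same one-hot targets without any gradient tracking (assuming `target` is a 1-D LongTensor of 0/1 labels, which is what my loader produces):

```python
import torch.nn.functional as F

# Assuming `target` is a 1-D LongTensor of 0/1 class labels
target1 = F.one_hot(target, num_classes=2).float()  # shape (batch, 2), no grad tracking
```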
I've been stuck on this for more than a couple of days now and haven't been able to find a solution. Any help would be very much appreciated!