Below are the model definition and the training loop. I have gone through a number of existing solutions and tried retain_graph=True, with no luck.
Model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class modelClassifier(nn.Module):
    def __init__(self, bottleneck_dim):
        super(modelClassifier, self).__init__()
        self.bottleneck_dim = bottleneck_dim
        self.fc01 = nn.Linear(self.bottleneck_dim, 200)  # bottleneck features -> hidden
        self.fc02 = nn.Linear(200, 10)                   # hidden -> 10 class scores

    def forward(self, x):
        x = F.relu(self.fc01(x))
        x = F.relu(self.fc02(x))
        return x
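A side note, unrelated to the RuntimeError: nn.CrossEntropyLoss expects raw, unnormalized logits, so the ReLU on the last layer clips every negative class score to zero. A sketch of the more usual forward:

    def forward(self, x):
        x = F.relu(self.fc01(x))
        return self.fc02(x)  # raw logits; CrossEntropyLoss applies log-softmax itself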
Model initialization:
model_clf = modelClassifier(bottleneck_dim=60)
optimizer_clf = torch.optim.SGD(model_clf.parameters(), lr=0.001)
loss_function_clf = nn.CrossEntropyLoss()
Training loop:

EPOCHS = 10
for epoch in range(EPOCHS):
    model_clf.train()
    for batch_num, (data, target) in enumerate(dataloader_clf):
        outputs = model_clf(data)
        loss_clf = loss_function_clf(outputs, target)
        optimizer_clf.zero_grad()
        loss_clf.backward(retain_graph=True)  # added while debugging the error below
        optimizer_clf.step()
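As a sanity check, the pieces above run cleanly on plain tensors even without retain_graph=True, which suggests the problem originates in how dataloader_clf was built rather than in this loop. A minimal sketch with synthetic data (the shapes are assumptions matching bottleneck_dim=60 and 10 classes; the real dataset is not shown in the post):

from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for the real dataset: 512 feature vectors and labels.
X = torch.randn(512, 60)           # plain tensors, no autograd history
y = torch.randint(0, 10, (512,))
dataloader_clf = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
# Running the training loop above with this dataloader raises no RuntimeError,
# even after removing retain_graph=True.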
Error Traceback:
RuntimeError                              Traceback (most recent call last)
<ipython-input-149-07e5978e3c22> in <module>
     10
     11 optimizer_clf.zero_grad()
---> 12 loss_clf.backward(retain_graph = True)
     13
     14 optimizer_clf.step()

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
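The loop itself builds a fresh graph for every batch, so on its own it never backwards through the same graph twice, and retain_graph=True only papers over the real issue. This error usually means the batches themselves carry autograd history from a computation done once, outside the loop; given bottleneck_dim, a plausible scenario (assumed here, not shown in the post) is bottleneck features produced by an encoder and packed into the dataloader. A minimal sketch of that scenario and the fix, using hypothetical names model_enc, raw_inputs, and labels:

# Hypothetical cause: features computed once WITH autograd history.
# Every batch then shares the encoder's graph, whose saved tensors are
# freed by the first backward() -- the second batch triggers the error.
#
#   features = model_enc(raw_inputs)
#   dataloader_clf = DataLoader(TensorDataset(features, labels), batch_size=32)

# Fix: compute the features without tracking gradients (or call .detach()
# on them), so each classifier batch owns a fresh, independent graph.
with torch.no_grad():
    features = model_enc(raw_inputs)
dataloader_clf = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)

With the history cut this way, retain_graph=True can be dropped from loss_clf.backward().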