Another PyTorch newbie here, trying to understand its computational graph and autograd.

I'm training the following model to fit a potential energy curve and the corresponding force:
```python
import torch
import torch.nn as nn
from torch.autograd import grad

model = nn.Sequential(
    nn.Linear(1, 32),
    nn.Linear(32, 32),
    nn.Tanh(),
    nn.Linear(32, 32),
    nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters())
loss = nn.MSELoss()

# generate data: E(r) = 1/r and F(r) = -dE/dr
r = torch.linspace(0.95, 3, 50, requires_grad=True).view(-1, 1)
E = 1 / r
F = -grad(E.sum(), r)[0]

inputs = r
for epoch in range(10**3):
    E_pred = model(inputs)
    # predicted force: differentiate E_pred w.r.t. r; create_graph=True so
    # that error.backward() can also propagate through this gradient
    F_pred = -grad(E_pred.sum(), r, create_graph=True, retain_graph=True)[0]
    optimizer.zero_grad()
    error = loss(E_pred, E.detach()) + loss(F_pred, F.detach())
    error.backward()
    optimizer.step()
```
However, if I change `inputs = r` to `inputs = 1*r`, the training loop breaks and gives the following error:
```
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
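In case it helps narrow things down, here is a stripped-down sketch without the model that seems to trigger the same error for me (the variable names here are mine, not from the code above). The only thing it shares with my training loop is that `inputs = 1*r` is built once, outside the loop, while everything on top of it is rebuilt every iteration:

```python
import torch

r = torch.linspace(0.95, 3, 50, requires_grad=True).view(-1, 1)
inputs = 1 * r  # graph node created once, outside the loop, as in my code

for epoch in range(2):
    out = (inputs ** 2).sum()  # fresh graph on top of `inputs` each epoch
    out.backward()             # second iteration raises the same RuntimeError
```

The first iteration runs fine; the second one fails with the RuntimeError above, just like the training loop.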
Could you please explain why this happens?