I’m using PyTorch for autograd without any neural network involved.
I can’t provide a fully reproducible example without it running to far too many lines, but at a high level my code looks like the following:
import torch
import torch.nn as nn
from torch.optim import Adam

class MarkovModel(nn.Module):
    def __init__(self):
        super(MarkovModel, self).__init__()
        self.potentials = nn.ParameterDict()

    def set_potentials(self, potentials):
        ...

class BeliefProb:
    def __init__(self, model):
        self.model = model
        self.beliefs = ...

    def update_beliefs(self):
        ...
        self.beliefs = ...
        ...

    def inference(self):
        # run a fixed number of belief-update sweeps
        for i in range(SOME_CONSTANT):
            self.update_beliefs()

iters = ANOTHER_CONSTANT
model = MarkovModel()
potentials = ...
model.set_potentials(potentials)
bp = BeliefProb(model)
optimizer = Adam(bp.model.parameters(), lr=0.01)
target = ...
for i in range(iters):
    optimizer.zero_grad()
    bp.inference()
    loss = torch.abs(target - bp.beliefs).sum()
    loss.backward()
    optimizer.step()
If I set iters = 1, everything works fine. But if iters > 1, I get this complaint:

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.

It seems that calling bp.inference() builds the computational graph once, but when I call bp.inference() again, the computational graph is not rebuilt from scratch for the second iteration.
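I can reproduce what looks like the same error with a toy example (hypothetical names, not my actual model), where a state tensor playing the role of self.beliefs is carried across optimizer steps and keeps its autograd history from the previous iteration:

import torch

# Toy sketch: `state` is carried over between iterations, so the second
# iteration's graph is built on top of the first iteration's graph.
w = torch.ones(3, requires_grad=True)
state = torch.zeros(3)
optimizer = torch.optim.Adam([w], lr=0.01)
target = torch.full((3,), 0.5)

for i in range(2):
    optimizer.zero_grad()
    # The new graph extends last iteration's `state`, whose graph was
    # already freed by the previous backward() call.
    state = torch.tanh(state + w)
    loss = torch.abs(target - state).sum()
    loss.backward()  # raises the "backward through the graph a second time" error on iteration 2
    optimizer.step()

If I detach the carried state at the start of each iteration (state = state.detach()), the toy loop runs without error, which makes me suspect the beliefs carried over between inference() calls are the problem in my real code as well.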