I am trying to use eager execution with custom nodes that have parameters.
My training loop runs for multiple iterations, so I call:
loss.backward(retain_graph = True)
Then I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-80ae8e772693> in <module>()
19 y = model()
20 loss = criterion(y, t)
---> 21 loss.backward(retain_graph = True)
22 optimizer.step()
1 frames
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [28, 128]] is at version 25088; expected version 21504 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
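For reference, here is a minimal sketch that reproduces the same failure mode; the model, data, and optimizer below are placeholders I made up, not my original code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real model/criterion/optimizer (assumptions):
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)
t = torch.randn(16, 1)

y = model(x)  # forward pass done once, outside the loop
err = None
try:
    for i in range(3):
        loss = criterion(y, t)
        loss.backward(retain_graph=True)  # reuse the same retained graph
        optimizer.step()                  # updates the weights in place
        optimizer.zero_grad()
except RuntimeError as e:
    err = e
print(err)
```

The second `backward()` fails because `optimizer.step()` modified, in place, parameter tensors that the retained graph still refers to, so their internal version counter no longer matches what autograd saved.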
Does this mean that the path for back-propagation is disconnected (i.e., it can no longer be traversed)?