Traceback (most recent call last):
  File "multi_expt.py", line 329, in <module>
    main()
  File "multi_expt.py", line 298, in main
    (bike_loss + taxi_loss).backward()
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: No grad accumulator for a saved leaf!
I think I have found where the problem is. I unbind a tensor into a list of subtensors and iterate over them to perform further operations. If I replace the unbind with direct indexing on the original tensor, the error goes away. A sketch of both patterns is below.
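For illustration, here is a minimal sketch of the two access patterns described above. The tensor names and shapes are made up, and the actual model code from `multi_expt.py` is not shown in this report; on the PyTorch build in the traceback it was the unbind-then-iterate path that raised the RuntimeError, while direct indexing did not.

```python
import torch

x = torch.randn(4, 3, requires_grad=True)  # hypothetical leaf tensor

# Pattern that triggered the error: unbind into a tuple of subtensor
# views, then pick each one out for further operations.
parts = torch.unbind(x, dim=0)
bike_loss = (parts[0] ** 2).sum()

# Workaround: index the original tensor directly instead of unbinding.
taxi_loss = (x[1] ** 2).sum()

# With the unbind path, backward() raised
# "RuntimeError: No grad accumulator for a saved leaf!" on this setup.
(bike_loss + taxi_loss).backward()
```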