RuntimeError no grad accumulator for a saved leaf error

Hey guys, these lines trigger the error below when running in JIT mode. Do you have any idea what is going on?

matching_scores = torch.matmul(bike_key_out.permute(0, 3, 4, 2, 1), taxi_key_out.permute(0, 3, 4, 1, 2))

bt_t_x = torch.matmul(matching_scores, taxi_x.permute(0, 3, 4, 2, 1)).permute(0, 4, 3, 1, 2)
Traceback (most recent call last):
  File "multi_expt.py", line 329, in <module>
    main()
  File "multi_expt.py", line 298, in main
    (bike_loss + taxi_loss).backward()
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: No grad accumulator for a saved leaf!

I don’t think those are the leaves the error is talking about. Could you try to reduce your code to a minimal reproducing example?

Best regards

Thomas

Hi,

It might be related to https://github.com/pytorch/pytorch/issues/19769
We would need a small code sample to reproduce this to be sure.

I think I have found out where the problem is. I was unbinding a tensor into a list of sub-tensors and iterating over them to perform further operations. If I replace that with direct indexing into the original tensor, the error is gone.
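
Concretely, the change was of roughly the following shape (a minimal sketch with placeholder tensor names and shapes, not the actual model code):

import torch

x = torch.randn(4, 8, 16, requires_grad=True)

# Pattern that triggered the error for me: unbind the tensor into a tuple
# of sub-tensors and iterate over them for the subsequent operations.
parts = torch.unbind(x, dim=0)
out = torch.stack([p * 2.0 for p in parts], dim=0)

# Workaround: index the original tensor directly instead of unbinding it.
out = torch.stack([x[i] * 2.0 for i in range(x.size(0))], dim=0)

out.sum().backward()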

Thanks for your help, I think I have found the cause of my issue.

Hi,

Could you share what you changed to fix this?