RuntimeError no grad accumulator for a saved leaf error

Hey guys, these lines trigger the error below in JIT mode. Do you have any idea why?

matching_scores = torch.matmul(bike_key_out.permute(0, 3, 4, 2, 1), taxi_key_out.permute(0, 3, 4, 1, 2))
bt_t_x = torch.matmul(matching_scores, taxi_x.permute(0, 3, 4, 2, 1)).permute(0, 4, 3, 1, 2)

Traceback (most recent call last):
  File "", line 329, in <module>
  File "", line 298, in main
    (bike_loss + taxi_loss).backward()
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/autograd/", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: No grad accumulator for a saved leaf!

I don’t think those are the leaves the error is talking about. Could you try to reduce your code to a minimal reproducing example?

Best regards



It might be related to a known issue. We would need a small code sample that reproduces this to be sure.

I think I found out where the problem is. I unbind a tensor into a list of subtensors and then iterate over them to perform further operations. If I substitute direct indexing on the original tensor instead, the error is gone.
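A minimal sketch of the two patterns described above (variable names are illustrative, not from the original model). The reported failure was that iterating over the result of `tensor.unbind()` inside a scripted/traced module raised "No grad accumulator for a saved leaf!" at `backward()`, while indexing the original tensor directly did not:

```python
import torch

x = torch.randn(4, 3, requires_grad=True)

# Pattern that reportedly failed under JIT: unbind into a list of
# subtensors, then operate on each one.
# parts = x.unbind(0)
# out = sum(p.sum() for p in parts)

# Workaround: index the original tensor directly instead of unbinding it.
out = sum(x[i].sum() for i in range(x.size(0)))
out.backward()

print(x.grad.shape)  # gradients flow back to the original leaf tensor
```

Both patterns are mathematically equivalent in eager mode; the difference only showed up when the module was run through the JIT.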

Thanks for your help, I think I found the cause of my issue.


Could you share what you changed to fix this?