Getting the autograd counter of a tensor in PyTorch

I am using PyTorch for training a network. I was going through the autograd documentation, and it mentions that autograd maintains a counter for each tensor to track its “version”. How can I read this counter for any tensor in the graph?

Reason why I need it.

I have encountered the autograd error:

[torch.cuda.FloatTensor [x, y, z]], which is output 0 of torch::autograd::CopySlices, is at version 7; expected version 6 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

This error is not new to me, and I have handled it successfully before. This time, however, I cannot see why the tensor would be at version 7 instead of 6. To figure that out, I want to be able to inspect the version counter at any given point in the run.
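For reference, here is a minimal sketch of the class of error involved (not your exact graph, and the tensor names are made up for illustration): an intermediate tensor is saved for the backward pass, then mutated in place via slice assignment, so its version counter no longer matches the saved one.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.sin()   # sin() saves y for its backward pass

y[0] = 0.0    # in-place slice assignment (CopySlices) bumps y's version

# backward now raises the version-mismatch RuntimeError,
# because the saved copy of y is stale
try:
    z.sum().backward()
except RuntimeError as e:
    print(e)
```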


You could access it via tensor._version, but note that it is an internal attribute, so I would use it only for debugging and would not depend on it for other use cases.