Double backward after pytorch version upgrade

I am getting the following error.

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

I am moving my code to 0.4.1. With a few version checks the code runs on both versions for a few epochs, but eventually 0.4.1 fails with this error. I’ve seen a couple of other posts suggesting this can be related to custom Variables, of which I have many. Just posting the question to see if anyone has thoughts on where to start looking at the differences between versions before I go crazy auditing every single Variable in my system.
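For anyone hitting the same message: it can be reproduced in isolation, independent of any version difference, by calling backward() twice on the same graph. A minimal sketch (variable names are just for illustration):

```python
import torch

# Calling backward() twice on the same graph without retain_graph=True
# frees the saved buffers on the first call, so the second call raises
# the RuntimeError quoted above.
x = torch.ones(3, requires_grad=True)
y = (x * x).sum()

y.backward()  # first backward succeeds and frees the graph buffers

try:
    y.backward()  # second backward on the freed graph fails
except RuntimeError as e:
    print("RuntimeError:", e)

# If a second backward is really intended, keep the graph alive:
z = (x * x).sum()
z.backward(retain_graph=True)
z.backward()  # works now; gradients accumulate into x.grad
```

In the original code the second backward is usually accidental, e.g. a tensor created in one iteration being reused in the next, so retain_graph=True is a diagnostic rather than a fix.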


I think the best practice is to remove all Variables and .data from your code.
Variable is not needed anymore; you can pass requires_grad=True when you create a tensor instead.
.data should be replaced by either .detach(), if the goal is to prevent gradients from flowing back, or by moving the operations inside a with torch.no_grad(): block, if the goal is to perform operations not tracked by the autograd engine.
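The replacements above can be sketched as follows (the names and values are just placeholders):

```python
import torch

# Old 0.3-style code to remove:
#   x = Variable(torch.randn(3), requires_grad=True)
#   frozen = x.data          # silently escapes autograd tracking
#
# 0.4-style replacements:
x = torch.randn(3, requires_grad=True)  # tensors take requires_grad directly

# Use .detach() instead of .data to block gradient flow:
frozen = x.detach()  # shares storage with x, but gradients stop here

# Use torch.no_grad() instead of .data for untracked operations,
# e.g. a manual parameter update:
with torch.no_grad():
    x -= 0.1 * torch.ones(3)  # in-place update, not recorded by autograd

print(x.requires_grad, frozen.requires_grad)  # True False
```

The difference matters because .data never errored when it broke the graph, while .detach() and torch.no_grad() make the intent explicit and are checked by autograd.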

The error you’re seeing is most likely due to a change in the Variable wrapping and the .data semantics. Removing them with the right tool should fix it 🙂


Thanks for the info! I’m in the process of doing that. Should I also remove Parameters, or are those still OK?


No, nn.Parameter remains the same. It is used by the nn internals to detect learnable tensors for .parameters()-like operations!
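As a quick illustration of why nn.Parameter should stay (module and attribute names here are made up for the example):

```python
import torch
import torch.nn as nn

# nn.Parameter is still how learnable tensors get registered on a Module:
class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(3))  # picked up by .parameters()
        self.offset = torch.zeros(3)               # plain tensor: NOT registered

    def forward(self, x):
        return x * self.weight + self.offset

m = Scale()
names = [n for n, _ in m.named_parameters()]
print(names)  # only the nn.Parameter appears
```

Only registered parameters are returned by .parameters(), so replacing nn.Parameter with a plain tensor would silently hide weights from optimizers and state_dict.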

Awesome! Getting rid of all the Variables and replacing all the .data uses with “with torch.no_grad():” makes it all run! Still gotta check it’s doing the same thing, but no more errors. Thanks!