Is it possible to hijack the storage of data needed in the backward pass?

Is it possible to take over the storage of the data used in backpropagation? For instance, if I have little memory on the GPU, I would like to store that data in CPU RAM and move it back when it is needed during the backward pass.


If you’re using a nightly build, or once the upcoming 1.10 is released, you will be able to register such hooks directly: Autograd mechanics — PyTorch master documentation
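As a rough sketch of what those hooks look like, here is a pair of pack/unpack hooks registered via `torch.autograd.graph.saved_tensors_hooks` that offload saved activations to CPU and restore them on demand during backward. The hook names `pack_to_cpu` and `unpack_from_cpu` are made up for illustration; the example uses a CPU tensor so it runs without a GPU, but the same code works for CUDA tensors:

```python
import torch

def pack_to_cpu(tensor):
    # Called when autograd saves a tensor for backward:
    # remember the original device and keep only a CPU copy.
    return tensor.device, tensor.cpu()

def unpack_from_cpu(packed):
    # Called when backward needs the saved tensor:
    # move it back to where it originally lived.
    device, tensor = packed
    return tensor.to(device)

x = torch.randn(4, 4, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_from_cpu):
    # x is saved for backward through the hooks above
    y = (x * x).sum()
y.backward()
```

The hooks only apply to tensors saved inside the `with` block, so you can scope the offloading to the memory-heavy parts of your model.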

And if you just want to move everything from GPU to CPU, there is a builtin for that: Automatic differentiation package - torch.autograd — PyTorch master documentation
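The builtin referred to here is the `torch.autograd.graph.save_on_cpu` context manager, which packages exactly the pack/unpack logic above: every tensor saved for backward is kept in CPU RAM and copied back to its original device when backward runs. A minimal sketch (CPU tensors for portability; with CUDA inputs you can also pass `pin_memory=True` to speed up the copies):

```python
import torch

a = torch.randn(4, 4, requires_grad=True)

# All tensors saved for backward inside this block are
# stored on the CPU and moved back on demand in backward.
with torch.autograd.graph.save_on_cpu():
    loss = (a @ a).sum()

loss.backward()
```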


Thank you, that was exactly what I was looking for!