Accessing the computation graph

Is there any way to access the computation graph created by PyTorch during the forward pass and move it from the GPU to the CPU? I want to free the GPU memory so that I can load the next part of the model onto it and continue training.

Hi @Rishabh_Dahale,

I don’t think it is possible.

But if your problem is that the computation graph is too big to fit in your GPU memory, gradient checkpointing might be able to help you here.

Please take a look at the official doc and at this tutorial. Even though the tutorial uses the Variable API, the checkpointing part is still relevant.
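
For reference, here is a minimal sketch of what gradient checkpointing looks like with `torch.utils.checkpoint.checkpoint`. The block names, layer sizes, and batch shape below are just placeholders, and the `use_reentrant=False` argument assumes a recent PyTorch version:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Illustrative model split into two large blocks (names and sizes are placeholders).
block1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
block2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
block1, block2 = block1.to(device), block2.to(device)

x = torch.randn(32, 1024, device=device, requires_grad=True)
target = torch.randint(0, 10, (32,), device=device)

# checkpoint() does not keep block1's intermediate activations in memory during
# the forward pass; they are recomputed during backward, trading compute for memory.
h = checkpoint(block1, x, use_reentrant=False)
out = block2(h)

loss = nn.functional.cross_entropy(out, target)
loss.backward()
```

This reduces the activation memory held on the GPU during training, which is usually the real constraint rather than the graph bookkeeping itself.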
