Removing a model from a GPU


Trying to remove a model from a GPU is proving difficult.

e.g. Link to ipython notebook

Any thoughts?


This is because we use a caching memory allocator. The memory is still considered free on PyTorch's side, and when you allocate new tensors, this memory will be reused. But it shows up as occupied from CUDA's point of view (e.g. in nvidia-smi).

Soon we will be exposing a function that lets you get back all the GPU memory you want! :slight_smile: See details here:
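For anyone landing on this thread later: a function like this did ship in released PyTorch as `torch.cuda.empty_cache()`. A minimal sketch of freeing a model's memory, assuming that API (the model and sizes here are just placeholders):

```python
import gc
import torch
import torch.nn as nn

if torch.cuda.is_available():
    # Placeholder model: any module moved to the GPU works the same way.
    model = nn.Linear(4096, 4096).cuda()
    print(torch.cuda.memory_allocated())  # nonzero: weights live on the GPU

    # Drop all Python references so the allocator can reclaim the tensors...
    del model
    gc.collect()
    # ...then release the cached blocks back to CUDA so tools like
    # nvidia-smi also report the memory as free.
    torch.cuda.empty_cache()
    print(torch.cuda.memory_allocated())  # back to (near) zero
else:
    print("no GPU available; nothing to free")
```

Note that `empty_cache()` alone is not enough: any live Python reference to a tensor (the model, an optimizer holding its parameters, a stored output) keeps that memory allocated.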


Awesome, thanks!

Will have to check back for this. My team would be super interested in this.