Hi!
Trying to remove a model from a GPU is proving difficult.
Any thoughts?
Thanks!
Brian
This is because we use a caching memory allocator. The memory is still considered free
on the PyTorch side, and when you allocate new tensors, that memory will be reused. But it shows up as occupied from CUDA's point of view (e.g. in nvidia-smi).
Soon we will be exposing a function that lets you get back all the GPU memory you want! See details here: https://github.com/pytorch/pytorch/issues/1529#issuecomment-339649776
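For reference, the function described in that issue exists in later PyTorch releases as `torch.cuda.empty_cache()`. A minimal sketch of freeing a model's GPU memory with it (the model here is a hypothetical placeholder; on a CPU-only machine the CUDA branch is simply skipped):

```python
import torch

# Hypothetical model used only for illustration
model = torch.nn.Linear(1024, 1024)

if torch.cuda.is_available():
    model = model.cuda()

    # Step 1: drop every Python reference to the model and its tensors,
    # so the caching allocator can mark the blocks as free.
    del model

    # Step 2: release cached, unused blocks back to the CUDA driver,
    # so tools like nvidia-smi see the memory as free again.
    torch.cuda.empty_cache()
```

Note that `empty_cache()` only returns blocks that are no longer referenced; tensors still reachable from Python keep their memory regardless.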
Awesome, thanks!
I'll have to check back for this. My team would be very interested in it.