How to delete PyTorch objects correctly from memory

Hi,

It is because the CUDA backend uses a caching allocator. This means that when a tensor is freed, its memory is released back to PyTorch's cache rather than returned to the device, so tools like `nvidia-smi` still report it as in use.

If, after running `del test`, you allocate more memory with `test2 = torch.Tensor(1000, 1000)`, you will see that the reported memory usage stays exactly the same: PyTorch did not allocate new device memory but reused the block that was cached when you ran `del test`.
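A minimal sketch of this behavior using `torch.cuda.memory_allocated()` (tensor memory currently in use) and `torch.cuda.memory_reserved()` (memory held by the caching allocator). It assumes a CUDA device is available; the variable names just mirror the snippet above:

```python
import torch

if torch.cuda.is_available():
    test = torch.empty(1000, 1000, device="cuda")
    reserved = torch.cuda.memory_reserved()   # memory held by the allocator's cache

    del test
    # The tensor's memory is freed back to the cache, not to the device:
    # memory_reserved() does not shrink after del.
    assert torch.cuda.memory_reserved() == reserved

    test2 = torch.empty(1000, 1000, device="cuda")
    # The new tensor reuses the cached block, so no new device memory is requested.
    assert torch.cuda.memory_reserved() == reserved
```

If you really need the cached memory handed back to the device (e.g. for another process on the same GPU), `torch.cuda.empty_cache()` releases the unused cached blocks.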
