Reserving GPU memory?

Hi,

First of all, from my point of view this is more of a cluster-sharing problem than a real need.

Anyway, your solution of allocating a tensor and then deleting it will work, because the caching allocator keeps the memory around for subsequent allocations. You don’t need to keep replacing it; you can simply do `del x` right after creating it.
Be aware that this can have the side effect of increasing your program’s overall memory usage, and that as soon as your program gets close to running out of memory, the allocator will free all unused cached memory and your “memory pool” will be gone.
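For illustration, here is a minimal sketch of the trick; the amount to reserve (`n_bytes`) is just an example value you would adjust:

```python
import torch

# Example reservation size (hypothetical): 4 GiB.
n_bytes = 4 * 1024 ** 3

# Allocate a throwaway tensor of roughly that size
# (float32 = 4 bytes per element), then delete it.
x = torch.empty(n_bytes // 4, dtype=torch.float32, device="cuda")
del x

# The memory is no longer allocated to any tensor, but the caching
# allocator still holds it and will reuse it for future allocations:
print(torch.cuda.memory_allocated())  # ~0
print(torch.cuda.memory_reserved())   # ~n_bytes
```

Note that other processes on the same GPU will still see this memory as taken (e.g. in `nvidia-smi`), which is what makes it act as a reservation.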