I don’t know what this error message means:
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Take a look at the Memory Management docs, which explain how the caching memory allocator works.
The last section describes how the PYTORCH_CUDA_ALLOC_CONF environment variable can be used to prevent fragmentation in case your workload is suffering from it.
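For reference, a minimal sketch of how the variable could be set (the value 128 and the script name train.py are just illustrative; pick a split size based on your workload's allocation sizes):

```python
import os

# The allocator reads this setting when it initializes, so it must be
# set before the first CUDA allocation. Setting it in the shell before
# launching also works, e.g.:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

x = torch.randn(1024, 1024, device="cuda")

# Compare allocated vs. reserved memory: a large gap between the two
# is the fragmentation symptom the error message refers to.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.1f} MiB")
```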
RuntimeError: CUDA out of memory. Tried to allocate 3.23 GiB (GPU 0; 16.00 GiB total capacity; 4.87 GiB already allocated; 6.53 GiB free; 7.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Can you help me?
Could you post a minimal, executable code snippet reproducing this issue, please?