CUDA reserved memory

I’m seeing the following message at the end of a CUDA out-of-memory error and I don’t understand what it means:

If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
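For context, "allocated" is the memory held by live tensors, while "reserved" is the larger pool the caching allocator keeps from the driver. A minimal sketch to compare the two (the helper name is mine; it requires a CUDA-enabled PyTorch build):

```python
import torch

def report_cuda_memory():
    """Print live-tensor memory vs. the allocator's cached pool."""
    if not torch.cuda.is_available():
        print("no CUDA device available")
        return
    allocated = torch.cuda.memory_allocated()  # bytes held by live tensors
    reserved = torch.cuda.memory_reserved()    # bytes cached by the allocator
    print(f"allocated: {allocated / 1e6:.1f} MB, "
          f"reserved: {reserved / 1e6:.1f} MB")

report_cuda_memory()
```

When reserved is far larger than allocated, the cache holds many blocks that no tensor currently uses but that are too fragmented to serve a new, larger request.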

Take a look at the Memory Management docs, which explain how the caching memory allocator works.
The last section describes how the PYTORCH_CUDA_ALLOC_CONF environment variable can be used to prevent fragmentation, in case your workload is suffering from it.
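As a sketch, the variable is set in the environment before launching the process; the 128 MB threshold and the `train.py` script name below are placeholders, not recommendations:

```shell
# Cap the size of blocks the allocator will split, to curb fragmentation.
# The value is workload-dependent; start from the docs' guidance and tune.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python train.py  # hypothetical training script
```

Smaller values make the allocator less willing to split large cached blocks, which can reduce fragmentation at the cost of more raw allocations.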