In Torch, we use cutorch.getMemoryUsage(i) to obtain the memory usage of the i-th GPU.
Is there a similar function in PyTorch?
Do you need that inside your script? If so, I don't know how. Otherwise, you can run nvidia-smi in the terminal to check it.
Yes, I'm trying to use it in a script.
The goal is to automatically find a GPU with enough memory left.
import torch.cuda as cutorch

# What I would like to write (getMemoryUsage is the Lua Torch call;
# it does not actually exist in torch.cuda):
for i in range(cutorch.device_count()):
    if cutorch.getMemoryUsage(i) > MEM:  # enough free memory on device i
        opts.gpuID = i
        break
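For anyone reading this later: newer PyTorch releases (1.10+, as far as I know) expose torch.cuda.mem_get_info, which returns the free and total device memory in bytes, so something close to the loop above works natively. A minimal sketch, assuming that API is available; MIN_FREE_BYTES is a made-up threshold:

import torch

MIN_FREE_BYTES = 4 * 1024**3  # hypothetical requirement: at least 4 GiB free

def pick_gpu(min_free=MIN_FREE_BYTES):
    """Return the index of the first GPU with at least min_free bytes free, else None."""
    for i in range(torch.cuda.device_count()):
        free, total = torch.cuda.mem_get_info(i)  # bytes, per the CUDA driver
        if free >= min_free:
            return i
    return None

gpu_id = pick_gpu()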
In case anyone else stumbles across this thread, I wrote a script to query nvidia-smi that might be helpful.
import subprocess

def get_gpu_memory_map():
    """Get the current GPU usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    result = subprocess.check_output(
        [
            'nvidia-smi', '--query-gpu=memory.used',
            '--format=csv,nounits,noheader'
        ], encoding='utf-8')
    # Convert lines into a dictionary
    gpu_memory = [int(x) for x in result.strip().split('\n')]
    gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
    return gpu_memory_map
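A quick usage example; the values are illustrative and will depend on your machine:

print(get_gpu_memory_map())  # e.g. {0: 512, 1: 3041} -- memory.used in MB per device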
GPUtil is another library that achieves the same goal.
But I’m wondering if PyTorch has some functions for this purpose.
My goal is to measure the exact memory usage of my model, and it varies with the input size, so I'm wondering whether PyTorch has such a function so that I can get a more accurate estimate of GPU memory usage.
Hi,
You can find these functions in the docs, here and below. You can get the current and max memory allocated on the GPU, i.e. the memory actually used to store tensors.
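To make that concrete for the input-size question above, here is a minimal sketch of how those counters can track a model's peak memory as the input grows; the toy model and batch sizes are placeholders, and it assumes a PyTorch version that has torch.cuda.reset_peak_memory_stats:

import torch
import torch.nn as nn

# Toy model purely for illustration; substitute your own network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()

for batch_size in (16, 64, 256):             # peak usage grows with the input size
    torch.cuda.reset_peak_memory_stats()     # restart the max_* counters
    x = torch.randn(batch_size, 1024, device='cuda')
    out = model(x)
    print(batch_size,
          torch.cuda.memory_allocated(),      # bytes currently held by tensors
          torch.cuda.max_memory_allocated())  # peak bytes since the reset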
Hi, thank you!
I found these functions later, but noticed that they did not match the nvidia-smi output. Also, what is the difference between max_memory_allocated/max_memory_cached and memory_allocated/memory_cached?
I raised these questions in the thread "Memory_cached and memory_allocated does not nvidia-smi result".
Just FYI, I was also looking for the total GPU memory, which can be found with:
torch.cuda.get_device_properties(device).total_memory
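As a usage sketch, looping over all devices (total_memory is reported in bytes):

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f'{i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB total')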
3 highly voted answers show different results. Why? Or am I missing something?
The different answers explain the use case of each code snippet, e.g. printing the nvidia-smi information from inside the script, checking the current and max allocated memory, and printing the total memory of a specific device, so you can choose whichever fits your definition of "memory usage".
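To illustrate the current-vs-max distinction asked about above, a small sketch (memory_reserved is the newer name for memory_cached): the max_* counters keep the high-water mark even after tensors are freed, and the allocator's cache plus the CUDA context are why nvidia-smi reports more than memory_allocated.

import torch

x = torch.empty(1024, 1024, 256, device='cuda')  # 2^28 float32 values, ~1 GiB
print(torch.cuda.memory_allocated())       # ~1 GiB, held by x
del x                                      # free the tensor
print(torch.cuda.memory_allocated())       # drops back toward 0
print(torch.cuda.max_memory_allocated())   # still ~1 GiB: the high-water mark
print(torch.cuda.memory_reserved())        # cache the allocator keeps for reuse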