I am trying to use the torch.cuda API to retrieve details of the GPUs (e.g. memory), but I cannot find a related API. How can I get GPU information using PyTorch? Thanks.
You can open a terminal and type nvidia-smi.
@SherlockLiao: I guess the question was about accessing GPU usage from inside Python code. If that's the case, you can try nvidia-ml-py, or simply run nvidia-smi via subprocess and parse the output as shown here. I am not aware of any torch.cuda API for doing this.
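A minimal sketch of the subprocess route described above: it asks nvidia-smi for a machine-readable CSV report and parses it into Python dicts. The helper names `query_gpus` and `parse_gpu_csv` are hypothetical, and this of course assumes nvidia-smi is on the PATH.

```python
import subprocess


def parse_gpu_csv(csv_text):
    """Parse 'index, name, memory.used, memory.total' CSV lines from nvidia-smi."""
    gpus = []
    for line in csv_text.strip().splitlines():
        index, name, mem_used, mem_total = [f.strip() for f in line.split(",")]
        gpus.append({
            "index": int(index),
            "name": name,
            "memory.used_mib": int(mem_used),
            "memory.total_mib": int(mem_total),
        })
    return gpus


def query_gpus():
    """Run nvidia-smi and return one dict per GPU (hypothetical helper)."""
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,name,memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ]).decode()
    return parse_gpu_csv(out)


if __name__ == "__main__":
    for gpu in query_gpus():
        print(gpu)
```

Splitting the parsing out into its own function makes it easy to test without a GPU, and the `noheader,nounits` options keep the output trivial to split on commas.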
@SelvamArul, thanks, that is the case. In nvidia-smi the GPU IDs differ from the ones reported by deviceQuery, and I need to know which GPU PyTorch is using by checking its memory usage.
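On the ID mismatch: by default the CUDA runtime enumerates devices fastest-first, while nvidia-smi lists them in PCI bus order, so the indices can disagree on multi-GPU machines. Setting the `CUDA_DEVICE_ORDER` environment variable to `PCI_BUS_ID` before CUDA is initialized makes the two orderings match. A short sketch:

```python
import os

# Must be set before CUDA is initialized, i.e. before importing torch
# (or at least before the first torch.cuda call). With this set, device 0
# in PyTorch is the same card as GPU 0 in nvidia-smi.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# import torch  # import afterwards so the ordering takes effect
```

In practice it is often simpler to export `CUDA_DEVICE_ORDER=PCI_BUS_ID` in the shell before launching the script, so no Python-side ordering of imports matters.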