PyTorch GPU ('CUDA') utilization


I’m using an NVIDIA GPU device with the GPU build of PyTorch. Is there a way to check the GPU (‘CUDA’) utilization while running code?
I tried torch.cuda.utilization(device='cuda'), but it raised the following error:
pynvml.nvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

torch.cuda.utilization is documented as using nvidia-smi under the hood. Could you check whether running nvidia-smi in a separate terminal works instead?
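For reference, here is a minimal sketch of polling nvidia-smi from Python instead of going through NVML. The function name and the None fallback are my own choices; it only returns a value on systems where the nvidia-smi binary actually exists:

```python
import shutil
import subprocess

def gpu_utilization_via_smi():
    """Query GPU utilization (%) through nvidia-smi, or return None
    when the binary is unavailable (e.g. on some Jetson boards)."""
    if shutil.which("nvidia-smi") is None:
        return None  # nvidia-smi not installed on this system
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None
    # One line per GPU; take the first device's utilization.
    return int(result.stdout.splitlines()[0].strip())

print(gpu_utilization_via_smi())
```

Calling this in a loop (with a short time.sleep between calls) gives a rough utilization trace alongside your training script.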

I’m using a Jetson AGX Orin, and it doesn’t support nvidia-smi.