Is there a PyTorchic way (command/code) to know how much memory the model is taking on the GPU, instead of using nvidia-smi?
Yes, you could use print(torch.cuda.memory_summary()).
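In case a quick example helps, here is a minimal sketch of measuring a model's footprint with torch.cuda.memory_allocated() alongside the summary (assuming torchvision is installed; resnet18 is just a placeholder model):

```python
import torch
import torchvision.models as models  # assumption: torchvision is available

device = torch.device("cuda")

# memory_allocated() reports bytes currently held by PyTorch tensors,
# so the difference before/after moving the model isolates its footprint.
before = torch.cuda.memory_allocated(device)

model = models.resnet18().to(device)  # placeholder model for illustration

after = torch.cuda.memory_allocated(device)
print(f"Model parameters/buffers on GPU: {(after - before) / 1024**2:.2f} MiB")

# Full allocator report, as suggested above:
print(torch.cuda.memory_summary(device))
```

Note that nvidia-smi will still show a larger number, since it includes the CUDA context and PyTorch's cached (but unallocated) memory, which memory_summary() breaks down separately.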