How can I check how much GPU memory my model is allocating?

Is there a PyTorch-native way (a command or some code) to check how much GPU memory the model is taking, instead of running nvidia-smi?

Yes, you can use print(torch.cuda.memory_summary()), which prints a report of the CUDA caching allocator's state.
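
For example, a minimal sketch (the nn.Linear model here is just a placeholder; swap in your own model). Besides memory_summary(), the related calls torch.cuda.memory_allocated() and torch.cuda.memory_reserved() return single byte counts instead of the full report:

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

# Placeholder model; substitute your own.
model = nn.Linear(1024, 1024).to(device)

# Bytes currently occupied by live tensors (parameters, buffers, activations).
print(torch.cuda.memory_allocated(device))

# Bytes reserved by PyTorch's caching allocator (always >= memory_allocated).
print(torch.cuda.memory_reserved(device))

# Full human-readable report of the caching allocator's state.
print(torch.cuda.memory_summary(device))
```

Note that these numbers won't match nvidia-smi exactly: nvidia-smi shows the whole reserved pool plus the CUDA context overhead, while memory_allocated() counts only the memory actually held by tensors.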
