How to know the memory allocated for a tensor on gpu?

Hi,

sys.getsizeof() will return the size of the Python object. It will be the same for all tensors, since every tensor is a Python object that just wraps the underlying data.
For each tensor, you have a method element_size() that gives you the size of one element in bytes, and a method nelement() that returns the number of elements.
So the memory used by a tensor a (CPU memory for a CPU tensor, GPU memory for a GPU tensor) is a.element_size() * a.nelement().
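For example, a minimal sketch (the tensor `a` and its shape are just illustrative):

```python
import torch

a = torch.randn(1000, 1000, device="cuda")  # or device="cpu"

# Bytes occupied by the tensor's data: bytes per element * number of elements
size_bytes = a.element_size() * a.nelement()
print(size_bytes)  # 1000 * 1000 * 4 bytes = 4,000,000 bytes for float32
```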

All Python objects are stored in CPU memory. Among PyTorch objects, only tensors can use GPU memory. So the GPU memory used by any object is the memory used by the GPU tensors it contains.
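If you want to cross-check on the GPU side, one option (not mentioned above) is torch.cuda.memory_allocated(), which reports the total memory currently occupied by tensors. Note that the caching allocator rounds allocations up (typically to 512-byte blocks), so the delta can be slightly larger than element_size() * nelement():

```python
import torch

before = torch.cuda.memory_allocated()
b = torch.randn(1000, 1000, device="cuda")
after = torch.cuda.memory_allocated()

# Delta is the allocation for b; it may be rounded up by the caching allocator
print(after - before)                   # e.g. slightly above 4,000,000 due to rounding
print(b.element_size() * b.nelement())  # exactly 4,000,000 for float32
```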
