Inspect memory usage


I’m working on making an inspector that examines each tensor’s or nn.Module’s GPU/CPU memory consumption (since nvidia-smi only shows total consumption).

Is there any built-in pytorch method to achieve this goal?
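There isn’t a single built-in per-tensor report, but one common approach is to walk the garbage collector’s tracked objects and compute each tensor’s footprint from `element_size() * nelement()`. A minimal sketch (the function name `mem_report` is my own; the try/except is there because some gc-tracked objects raise on attribute access):

```python
import gc
import torch

def mem_report():
    """Print every live tensor with its shape, approximate size, and device."""
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj):
                size_mb = obj.element_size() * obj.nelement() / 1024 ** 2
                device = "GPU" if obj.is_cuda else "CPU"
                print(type(obj).__name__, tuple(obj.size()),
                      f"{size_mb:.3f} MB", device)
        except Exception:
            pass  # some gc-tracked objects misbehave on attribute access
```

For CUDA totals, `torch.cuda.memory_allocated()` gives the memory currently held by tensors, which is usually less than what nvidia-smi shows because of the caching allocator.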


Hi smth, I’m running some very basic code in PyTorch to learn about memory usage, but the following error pops up. Any suggestions would be helpful.

OSError                                   Traceback (most recent call last)
 in ()
      1 cpuStats()
----> 2 memReport()

 in memReport()
      1 def memReport():
      2     for obj in gc.get_objects():
----> 3         if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(
      4             print(type(obj), obj.size())

/usr/lib/python3.6/ctypes/__init__.py in __getattr__(self, name)
    416         if name[0] == '_':
    417             raise AttributeError(name)
--> 418         dll = self._dlltype(name)
    419         setattr(self, name, dll)
    420         return dll

/usr/lib/python3.6/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
    347         if handle is None:
--> 348             self._handle = _dlopen(self._name, mode)
    349         else:
    350             self._handle = handle

OSError: data: cannot open shared object file: No such file or directory

@aakashbhatia19 i’ve edited the original snippet for PyTorch 0.4 and above.