How to determine memory used by a given computational graph?

Let’s say I have some computational graph - is it possible to determine the amount of memory it needs? I know there are utilities that let you monitor the total memory used (and only if you use a GPU), but that is not very convenient if you have to monitor multiple graphs at once. I imagine it must somehow be possible to recurse down the graph to get the total size?

I created a small toy example:

import torch
import torch.nn as nn
net = torch.nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
    nn.MaxPool2d(kernel_size=2, stride=1, padding=0),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)
optim = torch.optim.SGD(params=net.parameters(), lr=0.001)
x = torch.randn((1, 1, 64, 64))
y = net(x).sum()  # how much memory?
y.backward()      # how much memory?
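The parameter (and registered-buffer) memory of the model itself is at least directly measurable from Python; a minimal sketch (the helper name `param_bytes` is mine, and this deliberately ignores activations and autograd buffers):

```python
import torch
import torch.nn as nn

def param_bytes(module: nn.Module) -> int:
    """Sum the storage of all parameters and registered buffers."""
    total = 0
    for p in module.parameters():
        total += p.numel() * p.element_size()
    for b in module.buffers():
        total += b.numel() * b.element_size()
    return total

net = nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
    nn.MaxPool2d(kernel_size=2, stride=1, padding=0),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)
# two 3x3 kernels (9 weights each) + two biases = 20 params * 4 bytes
print(param_bytes(net))  # → 80
```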


I’m afraid it’s not possible, as some buffers are only saved on the C++ side and are not accessible from Python. You will need to check the global memory usage differences to see this :confused:


I see, now it makes sense that I never found these buffers when I tried to explore the graph manually :) So I guess I’m going to use hooks to read out the intermediate values. Thanks for your help!
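For anyone landing here later, that hook-based idea can be sketched like this: register a forward hook on each submodule and record the size of its output activation (the `activation_bytes` bookkeeping is my own, and this only counts module outputs, not internal autograd buffers):

```python
import torch
import torch.nn as nn

activation_bytes = {}

def make_hook(name):
    def hook(module, inputs, output):
        # record the storage of this module's output activation
        activation_bytes[name] = output.numel() * output.element_size()
    return hook

net = nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
    nn.MaxPool2d(kernel_size=2, stride=1, padding=0),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)
for name, module in net.named_modules():
    if name:  # skip the Sequential container itself
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 1, 64, 64)
net(x)
print(sum(activation_bytes.values()))  # total bytes of recorded activations
```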

Is that still the case?

If yes, would there be a way to persist the computational graph to disk? In that case, we could measure the size from there.

No, you cannot easily save that.
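A quick way to see this: serializing a non-leaf tensor with torch.save keeps the data but drops its grad_fn, so the graph never reaches the disk (a small sketch, assuming default serialization behavior):

```python
import os
import tempfile

import torch

x = torch.randn(3, requires_grad=True)
y = x * 2  # non-leaf tensor: y.grad_fn is a MulBackward node

path = os.path.join(tempfile.mkdtemp(), "y.pt")
torch.save(y, path)
z = torch.load(path)

print(y.grad_fn)  # a backward node object
print(z.grad_fn)  # None: the autograd history was not serialized
```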

Is that still the case?

I’m afraid it is still the case right now.
We are working on providing a (hacky) way to access these attributes, which would allow you to check all the saved Tensors and their sizes: Codegen python bindings to access attributes of grad_fn by soulitzer · Pull Request #52451 · pytorch/pytorch · GitHub
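In newer PyTorch releases there is also torch.autograd.graph.saved_tensors_hooks, which intercepts every tensor autograd stashes for backward; a sketch that totals their sizes during the forward pass (the `saved_bytes` list and hook names are mine):

```python
import torch
import torch.nn as nn

saved_bytes = []

def pack(t):
    # called for every tensor autograd saves for the backward pass
    saved_bytes.append(t.numel() * t.element_size())
    return t

def unpack(t):
    # inverse of pack; here we stored the tensor itself, so just return it
    return t

net = nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),
)
x = torch.randn(1, 1, 64, 64)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = net(x).sum()
print(sum(saved_bytes))  # total bytes of tensors saved for backward
y.backward()
```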