Memory consumption of computation graph

Is there a way to determine how much memory the computation graph of
a tensor occupies? This is sometimes important to know for memory-bottlenecked networks: if a less memory-hungry model with comparable performance exists (for example a GRU instead of an LSTM), one can switch to it.

For GPU memory, you can use torch.cuda.max_memory_allocated and torch.cuda.memory_allocated. The difference between the memory allocated before the forward pass and the peak reached during it gives a rough upper bound on the memory held by the graph (the activations saved for backward), since the peak also includes the output tensors themselves.
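As a rough sketch of how one might compare two recurrent models this way (the helper name `peak_graph_memory` and the layer sizes are my own choices, not from the original post; a CUDA device is assumed):

```python
# Hypothetical sketch: compare peak GPU memory of an LSTM vs. a GRU forward
# pass using torch.cuda.memory_allocated / torch.cuda.max_memory_allocated.
import torch

def peak_graph_memory(module, inp):
    """Extra peak GPU memory (bytes) allocated during a forward pass.

    This includes the activations autograd saves for backward (the
    computation graph) as well as the output tensors, so it is an
    upper bound on the graph's footprint."""
    torch.cuda.reset_peak_memory_stats()
    base = torch.cuda.memory_allocated()
    out, _ = module(inp)  # the graph is built during this call
    peak = torch.cuda.max_memory_allocated()
    del out
    return peak - base

if torch.cuda.is_available():
    x = torch.randn(64, 100, 256, device="cuda")  # (batch, seq, features)
    lstm = torch.nn.LSTM(256, 512, batch_first=True).cuda()
    gru = torch.nn.GRU(256, 512, batch_first=True).cuda()
    print("LSTM:", peak_graph_memory(lstm, x), "bytes")
    print("GRU: ", peak_graph_memory(gru, x), "bytes")
else:
    print("CUDA not available; measurements skipped.")
```

Note that this measures one forward pass only; running it inside `torch.no_grad()` instead would show how much of the footprint comes from the saved activations alone.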

Best regards