GPU memory - free and consumed by my Variables?

Newbie question, but I was not able to google an answer.
When running a Keras model on the GPU (with the TensorFlow backend), a message is automatically displayed showing the total and free amount of GPU memory.
Can I get the same functionality in PyTorch? How can I display the total and free GPU memory with PyTorch?
What's the simplest way to display the total GPU memory consumed by all Variables in my program?

On the shell:

watch -n 1 nvidia-smi

should serve you well.

Thanks a lot for the tip. That's exactly what I was looking for.
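For completeness, PyTorch itself also exposes its allocator statistics, which answers the "consumed by my Variables" part of the question directly: `torch.cuda.memory_allocated()` and `torch.cuda.memory_reserved()` are real PyTorch APIs (the latter was called `torch.cuda.memory_cached()` before PyTorch 1.4), while the `gpu_memory_summary` helper name below is made up for illustration. Note that nvidia-smi will usually report more usage than `memory_allocated`, because PyTorch's caching allocator keeps freed blocks reserved for reuse.

```python
try:
    import torch
    _HAVE_CUDA = torch.cuda.is_available()
except ImportError:  # torch not installed; degrade gracefully for the sketch
    torch = None
    _HAVE_CUDA = False

def gpu_memory_summary():
    """Print and return (total, allocated, reserved) bytes per visible GPU.

    Hypothetical helper; the torch.cuda calls inside are the real API.
    """
    stats = []
    if not _HAVE_CUDA:
        print("no CUDA device available")
        return stats
    for i in range(torch.cuda.device_count()):
        total = torch.cuda.get_device_properties(i).total_memory
        allocated = torch.cuda.memory_allocated(i)  # bytes currently held by tensors
        reserved = torch.cuda.memory_reserved(i)    # bytes held by the caching allocator
        print(f"cuda:{i}: total {total / 2**20:.0f} MiB, "
              f"allocated {allocated / 2**20:.0f} MiB, "
              f"reserved {reserved / 2**20:.0f} MiB")
        stats.append((total, allocated, reserved))
    return stats

gpu_memory_summary()
```

Unlike watching nvidia-smi, this reports only what the current process's PyTorch allocator holds, so it is the number that actually tracks your tensors.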

Other options are:

! watch -n 0.1 'ps f -o user,pgrp,pid,pcpu,pmem,start,time,command -p `lsof -n -w -t /dev/nvidia*`'

sudo apt-get install dstat          # install dstat
sudo pip install nvidia-ml-py       # install the Python NVIDIA Management Library
sudo mv /usr/share/dstat/           # move the dstat NVIDIA plugin into dstat's plugin directory
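The nvidia-ml-py package installed above can also be queried directly from Python, without dstat. A minimal sketch, assuming a recent pynvml module layout; the NVML calls (`nvmlInit`, `nvmlDeviceGetHandleByIndex`, `nvmlDeviceGetMemoryInfo`) are the library's real API, and `nvml_memory_info` is a made-up helper name:

```python
try:
    import pynvml  # provided by the nvidia-ml-py package
except ImportError:
    pynvml = None

def nvml_memory_info(device_index=0):
    """Return (total, free, used) bytes for one GPU, or None if NVML is unavailable."""
    if pynvml is None:
        return None
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:  # no NVIDIA driver on this machine
        return None
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        return (info.total, info.free, info.used)
    finally:
        pynvml.nvmlShutdown()

print(nvml_memory_info())
```

This gives the same totals nvidia-smi shows, but as plain numbers you can log or poll from inside a training script.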