Barely any GPU utilization when I run my script

I wrote a latent space model for networks and implemented it in PyTorch. The data loads onto the GPU just fine, but when I run the code, the GPU only uses between 0% and 2% of its total computing capacity. If anyone can spot where I go wrong, or where I can optimize, it would be highly appreciated.
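For context, this is roughly how I confirm the data is on the GPU before training (a minimal sketch with a random placeholder tensor standing in for the actual network data from the link below):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder for the real adjacency/network data
adjacency = torch.randn(500, 500)
adjacency = adjacency.to(device)

# Prints "cuda:0" when the tensor actually lives on the GPU
print(adjacency.device)
```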

The network file and code are here:
https://drive.google.com/open?id=1b2dNYahbXdKT5qKYsVHTY3zwOOZOO1z5