Screen becomes laggy when using GPU

This is not a PyTorch-specific problem, but I don’t know where else to ask. I run PyTorch on Ubuntu because it is one of the most popular platforms for ML. However, one problem has been bothering me: when I train neural networks on the GPU (using CUDA), my screen responds very slowly, for example when scrolling up and down in a web browser. This does not happen if I use Ubuntu on Wayland instead of Xorg, but I can’t find the option to use Wayland on Ubuntu 19.10; the gear icon on the login screen isn’t there. I can’t be the only person having this problem, right? I wonder how other people deal with it.

Unfortunately, that happens when your rendering uses the GPU and you do GPU compute on it at the same time.
What I used to do on my local machine is to get an old and cheap GPU used only for rendering the screen: attach your monitor to it and hide it with CUDA_VISIBLE_DEVICES=1 when running the PyTorch job.
That way the two are independent.
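As a rough sketch of what that looks like in practice (the index 1 is just an assumption; check nvidia-smi to see which device your monitor is actually attached to):

```python
import os

# Expose only the compute GPU to PyTorch and hide the one driving the display.
# This must be set before torch is imported, because CUDA reads the variable
# when it initializes.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

# PyTorch now sees a single GPU, which it exposes as cuda:0.
print(torch.cuda.device_count())  # -> 1
device = torch.device("cuda:0")
x = torch.randn(1024, 1024, device=device)  # runs on the compute GPU only
```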

Makes sense. But why doesn’t Windows have this issue? If I run PyTorch on Windows, rendering is very smooth, though I think PyTorch on Windows uses more memory?

It’s not a memory problem but a matter of compute on the GPU. If the rendering does not use that GPU, it will run smoothly. That is very OS-dependent, I’m afraid.

If you have the possibility, you can also use the motherboard’s integrated graphics output for rendering, which puts more load on the CPU side but doesn’t require an additional GPU, for example if space in the case is limited… As always, it’s a tradeoff…

You can then block Xorg from using the GPU and free a bit of GPU memory this way (I think this is what I followed, but I can’t remember for sure: https://gist.github.com/alexlee-gk/76a409f62a53883971a18a11af93241b).
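If you want to confirm that Xorg is no longer holding memory on the compute GPU, something like this should work on a reasonably recent PyTorch build (torch.cuda.mem_get_info is not available in older versions, so treat it as a sketch):

```python
import torch

# Query free/total memory on the compute GPU. If Xorg has been kept off this
# device, free should be close to total (minus PyTorch's own CUDA context).
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 1024**2:.0f} MiB / total: {total / 1024**2:.0f} MiB")
```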