PyTorch on Linux: can I use a desktop environment while training on GPU?

Hello everybody,
It’s my first post here; maybe it’s a stupid question, but I’ve always used PyTorch on cloud GPU instances. Recently I’ve been experiencing some issues with spot instances (they get preempted quite quickly) and, since I’d like to tinker a bit more with some models, I’m considering buying a physical GPU (not an extra-easy thing to do right now, but I think I can get a 3070 or 3080).

The big question is: can I use PyTorch while I run a desktop environment with a compositor on the same GPU (OS is Ubuntu Linux 20.04), or do I need to set up a separate, headless computer just for the GPU? Will the desktop environment take up a lot of the available memory?

(Of course I won’t play games or run 3D-intensive software while I’m training, but some basic desktop apps will be running.)

I’ve found some old threads where they recommend leaving the Nvidia GPU for CUDA all by itself; is that still the way to go?

I run Ubuntu with a CUDA GPU while training deep learning models and it doesn’t seem to hurt my performance much. I have a 2080, so a 3080 will probably handle it even better. As long as you’re not doing a lot in the background, the OS shouldn’t take that much GPU power.
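If you want to see exactly how much memory the desktop environment is eating into, you can query the device from PyTorch itself. This is a minimal sketch using `torch.cuda.mem_get_info()` (available in recent PyTorch releases); the "used by other processes" figure will include Xorg/Wayland, the compositor, and anything else holding GPU memory:

```python
import torch

GIB = 1024 ** 3  # bytes per GiB

if torch.cuda.is_available():
    # mem_get_info() returns (free_bytes, total_bytes) for the current device.
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    # Memory not free and not allocated by this process is held by others
    # (e.g. the display server and desktop apps).
    used_elsewhere = total_bytes - free_bytes
    print(f"Total GPU memory: {total_bytes / GIB:.2f} GiB")
    print(f"Free GPU memory:  {free_bytes / GIB:.2f} GiB")
    print(f"Used by other processes (desktop, etc.): {used_elsewhere / GIB:.2f} GiB")
else:
    print("No CUDA device visible")
```

On a typical Ubuntu desktop the display stack tends to occupy a few hundred MiB, so on an 8–10 GiB card it's a real but usually tolerable cost. `nvidia-smi` will give you the same information broken down per process.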