It’s my first post here; maybe it’s a stupid question, but I’ve always used PyTorch on cloud GPU instances. Recently I’ve been experiencing some issues with spot instances (they get preempted quite quickly) and, since I’d like to tinker a bit more with some models, I’m considering buying a physical GPU (not an easy thing to find right now, but I think I can get a 3070 or 3080).
The big question is: can I use PyTorch while I run a desktop environment with a compositor on the same GPU (OS is Ubuntu Linux 20.04), or do I need to set up a separate, headless machine just for the GPU? Will the desktop environment take up a lot of the available memory?
(Of course I won’t play games or run 3D-intensive software while I’m training, but some basic desktop apps will be running.)
I’ve found some old threads recommending leaving the Nvidia GPU for CUDA all by itself — is that still the way to go?
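For what it’s worth, I figured that once I have the card I could at least measure how much memory the desktop is already eating before a run. A minimal sketch of what I had in mind (assuming a recent PyTorch that has `torch.cuda.mem_get_info`):

```python
import torch

# Check how much GPU memory is already in use (desktop, compositor, etc.)
# before starting a training run. mem_get_info() returns (free, total) in bytes
# for the current device, so total - free is what other processes hold.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()
    used_mib = (total - free) / 1024**2
    total_mib = total / 1024**2
    print(f"Already in use: {used_mib:.0f} MiB of {total_mib:.0f} MiB")
else:
    print("No CUDA device available")
```

I could also just watch `nvidia-smi` while the desktop is idle, but I wasn’t sure if that’s enough to judge whether training will be affected.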