I find that while I’m training models, other programs on my machine that use the GPU become noticeably less responsive. A good example is Xournal++, which I use to take notes with a stylus.
I’m guessing programs access the GPU in a time-shared fashion, much like a CPU. Is there a way to tell PyTorch to use the GPU only on a reduced duty cycle, leaving more headroom for my other applications?
Should I just put a sleep in the forward method of my model?
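One way to sketch the "reduced duty cycle" idea without touching the model itself: sleep between training iterations, with the sleep length proportional to how long the step took. The helper below is hypothetical (not a PyTorch API); the `DutyCycleThrottle` name and the `duty` parameter are my own. Note that CUDA kernel launches are asynchronous, so in real training you would call `torch.cuda.synchronize()` before measuring, to ensure the queued GPU work is included; the sketch itself is pure Python so it runs without a GPU.

```python
import time

class DutyCycleThrottle:
    """Hypothetical helper: sleep after each training step so the
    GPU-busy portion occupies roughly `duty` of wall-clock time."""

    def __init__(self, duty=0.5):
        assert 0.0 < duty <= 1.0
        self.duty = duty
        self._t0 = time.monotonic()

    def step(self):
        # In real PyTorch code, call torch.cuda.synchronize() here
        # first so pending GPU kernels count toward the busy time.
        busy = time.monotonic() - self._t0
        # Sleep long enough that busy / (busy + pause) ≈ duty.
        pause = busy * (1.0 - self.duty) / self.duty
        if pause > 0:
            time.sleep(pause)
        self._t0 = time.monotonic()

# Usage sketch inside a training loop (names are illustrative):
# throttle = DutyCycleThrottle(duty=0.7)
# for batch in loader:
#     loss = model(batch)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     throttle.step()   # yields ~30% of wall time to other GPU users
```

This keeps the model code untouched and concentrates the pause at iteration boundaries, which is gentler on throughput than sleeping inside `forward` (where a pause could stall mid-batch while activations sit in GPU memory).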