Is there a way to specify GPU duty cycle while training?

I find that when I’m training models while still working on my machine, other programs that use the GPU become less responsive. A good example is Xournal++, which I use to take notes with a stylus.

I’m guessing programs access the GPU in a time-shared fashion, as with a CPU. But is there any way to tell PyTorch to use the GPU at a reduced duty cycle, leaving more headroom for my side applications?

Put a sleep in the forward method of my model? :stuck_out_tongue:

I don’t think you can reduce the utilization in any other way than by artificially reducing the workload, e.g. through sleeps. :confused:
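
To make the idea concrete, here’s a minimal sketch of a throttled training loop. The `duty_cycle` knob and the toy model/data are made up for illustration; swap in your own model and loader. The `torch.cuda.synchronize()` call matters: CUDA kernel launches are asynchronous, so without it the sleep would overlap the queued GPU work and the duty cycle wouldn’t actually drop.

```python
import time
import torch
import torch.nn as nn

# Made-up knob for this sketch: rough fraction of wall time
# the GPU is allowed to stay busy.
duty_cycle = 0.5

device = "cuda"
model = nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for step in range(100):
    start = time.perf_counter()

    # Dummy batch; replace with your real data loading.
    x = torch.randn(256, 1024, device=device)
    y = torch.randn(256, 1024, device=device)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # CUDA kernels run asynchronously, so wait for them to finish
    # before timing; otherwise the sleep overlaps the GPU work.
    torch.cuda.synchronize()
    busy = time.perf_counter() - start

    # Idle long enough that the busy period is roughly `duty_cycle`
    # of each iteration, freeing the GPU for other applications.
    time.sleep(busy * (1.0 / duty_cycle - 1.0))
```

Note this trades training throughput for responsiveness roughly in proportion to `duty_cycle`, and the timing is per-iteration, so very long individual kernels will still block other applications for their full duration.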

Yeah that makes sense. Thanks!