Is there any sort of GPU scheduler / dispatcher?

I know I can select which CUDA device to run my code on (`cuda:0`, `cuda:1`, etc.). But is there a way to have PyTorch look at which GPU is currently the least used and place the model there automatically? I often have multiple models running at once, and I'd like to avoid manually checking nvidia-smi and editing my code to update the device.
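
For reference, the closest I've come is picking the device with the most free memory myself via `torch.cuda.mem_get_info`. This is just a rough sketch (the `least_used_gpu` helper is my own name, not a built-in), and it's a one-shot snapshot rather than a real scheduler:

```python
import torch

def least_used_gpu() -> torch.device:
    """Return the CUDA device with the most free memory right now.

    torch.cuda.mem_get_info(i) returns (free_bytes, total_bytes)
    for device i, so this only checks memory at call time; it
    doesn't track compute utilization or future allocations.
    """
    free_bytes = [
        torch.cuda.mem_get_info(i)[0]
        for i in range(torch.cuda.device_count())
    ]
    best = max(range(len(free_bytes)), key=free_bytes.__getitem__)
    return torch.device(f"cuda:{best}")

# usage, e.g.: model = MyModel().to(least_used_gpu())
# (MyModel is just a placeholder for whatever model I'm loading)
```

Is there something built in, or a recommended pattern, that does this properly, e.g. taking GPU utilization into account rather than just free memory?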