Automatically loading/unloading models between disk and GPU

I am implementing a project that uses, say, 100 models and trains a random sample of them in each iteration. The problem is that my GPU memory is not sufficient to hold them all, so I was wondering if it is possible to keep a pointer to a model on disk: when it is time for training, the model is retrieved, and after training it is saved back to disk. That way, only a limited number of active models would live on the GPU, and the rest would hibernate on disk, waiting to be called.
I would be happy if PyTorch had this option built in, so it wouldn't be necessary to implement this part myself.
In short: I want to save and load a list of models without building a dictionary and saving the files separately myself.
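To make the question concrete, here is a minimal sketch of the pattern I have in mind, assuming no such built-in exists: a hypothetical `ModelPool` helper that keeps every model's `state_dict` checkpointed on disk and only materializes a model on the training device while it is being used. All names (`ModelPool`, `load`, `save`) are my own invention, not a PyTorch API.

```python
import os
import torch
import torch.nn as nn

class ModelPool:
    """Hypothetical helper: keeps model weights on disk, loads one model
    onto the training device only for the duration of its update."""

    def __init__(self, model_fn, n_models, ckpt_dir, device="cpu"):
        self.model_fn = model_fn    # factory that builds a fresh model instance
        self.ckpt_dir = ckpt_dir
        self.device = device
        # Initialize every model once and checkpoint its state_dict to disk.
        for i in range(n_models):
            torch.save(model_fn().state_dict(), self._path(i))

    def _path(self, i):
        return os.path.join(self.ckpt_dir, f"model_{i}.pt")

    def load(self, i):
        """Rebuild model i from its checkpoint and move it to the device."""
        model = self.model_fn().to(self.device)
        state = torch.load(self._path(i), map_location=self.device)
        model.load_state_dict(state)
        return model

    def save(self, i, model):
        """Checkpoint model i back to disk (CPU tensors, device-agnostic)."""
        torch.save(model.to("cpu").state_dict(), self._path(i))
```

Per iteration one would then `load` a random sample of indices, train those models, and `save` them again, so GPU memory only ever holds the sampled subset. (Optimizer state would need the same treatment via `optimizer.state_dict()` if the training uses stateful optimizers like Adam.)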