Hi, I’m working on building a model with weakly-supervised learning.
To generate data in a weakly-supervised manner during training, I’d like to dedicate one GPU to data processing and pass the processed data to the other GPUs, which would train on it in parallel.
Is this possible? And could you give me an example or snippet?
I think data synchronization between the data-dedicated GPU and the training GPUs is the important part here.
I’m also curious whether it’s possible to select the GPU device inside a DataLoader.
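To make the question concrete, here is a minimal sketch of what I have in mind. The `generate_weak_labels` function and its labelling rule are just placeholders for my real weak-label generation; the device split assumes two GPUs and falls back to CPU so the snippet runs anywhere:

```python
import torch
import torch.nn.functional as F

# Pick devices: a dedicated "data" GPU and a "training" GPU when two
# GPUs are available; fall back to CPU so the sketch runs anywhere.
if torch.cuda.device_count() >= 2:
    data_device = torch.device("cuda:0")
    train_device = torch.device("cuda:1")
else:
    data_device = train_device = torch.device("cpu")

def generate_weak_labels(batch, device):
    """Placeholder for weak-label generation on the data-dedicated device."""
    batch = batch.to(device)
    labels = (batch.sum(dim=1) > 0).long()  # toy labelling rule, not the real one
    return batch, labels

model = torch.nn.Linear(8, 2).to(train_device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(3):
    raw = torch.randn(16, 8)                       # raw CPU batch
    x, y = generate_weak_labels(raw, data_device)  # processed on the data GPU
    # Copy to the training GPU; non_blocking=True lets the copy overlap
    # with compute when the source memory allows it.
    x = x.to(train_device, non_blocking=True)
    y = y.to(train_device, non_blocking=True)
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This copies each batch with `.to(train_device)` in the training loop, but I’m unsure whether that synchronizes correctly with the training GPUs, or whether this per-batch device placement could instead live inside the DataLoader (e.g. in a `collate_fn`).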