Hey,
I have a classic use case: white-box adversarial attack generation. For each input image from the test set (loaded with a dataloader, attacking a pre-trained model), I generate an adversarial example by running a standard optimization loop with the Adam optimizer.
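For context, here is roughly what my per-image attack looks like (a minimal sketch; `model`, the loss, and the hyperparameters stand in for my actual setup, and I've omitted the perturbation-norm constraint for brevity):

```python
import torch
import torch.nn.functional as F

def attack_single(model, image, label, device, steps=100, lr=0.01):
    # Optimize a perturbation delta so that model(image + delta) is misclassified.
    model.eval()
    image, label = image.to(device), label.to(device)
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(image + delta)
        # Untargeted attack: maximize the loss on the true label.
        loss = -F.cross_entropy(logits, label)
        loss.backward()
        optimizer.step()
        # (projection onto an epsilon-ball would go here)
    return (image + delta).detach()
```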
I have a machine with multiple GPUs, so I want to parallelize the optimization over different images across the GPUs, since the attacks are completely independent of each other.
Essentially, these can be treated as entirely separate processes, except that they all use the same trained model (separate copies per GPU would be fine if necessary). There also needs to be a "main" process that, on every iteration, loads <num_gpus> samples from the dataloader and sends each one to a different GPU.
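Here is a rough sketch of the structure I have in mind, using `torch.multiprocessing` with one worker per GPU pulling samples from a shared queue (reusing the hypothetical `attack_single` from above; I'm not sure this is the right approach):

```python
import torch
import torch.multiprocessing as mp

def worker(rank, model, task_queue, result_queue):
    # Each worker owns one GPU and its own copy of the model.
    device = torch.device(f"cuda:{rank}")
    model = model.to(device)
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: no more work
            break
        image, label = item
        adv = attack_single(model, image, label, device)
        result_queue.put(adv.cpu())

def main(model, dataloader, num_gpus):
    mp.set_start_method("spawn", force=True)  # required for CUDA in subprocesses
    task_queue, result_queue = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(rank, model, task_queue, result_queue))
               for rank in range(num_gpus)]
    for w in workers:
        w.start()
    # Main process feeds samples; idle workers pull the next one as they finish.
    n = 0
    for image, label in dataloader:  # assumes batch_size=1 in the dataloader
        task_queue.put((image, label))
        n += 1
    for _ in workers:
        task_queue.put(None)
    results = [result_queue.get() for _ in range(n)]  # drain before join to avoid deadlock
    for w in workers:
        w.join()
    return results
```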
What's the cleanest way to implement this parallelization?
Thanks!