Hi all!
I’ve run into an issue with a test scenario I need to run. I have a dozen models built with the exact same pattern (WideResNet 28-10). Each of those models has slight differences due to training divergences.
However, I need to evaluate each of those models on the entire test set (4 minibatches, CIFAR-10). Those models may eventually go back into a training iteration after the test.
The easy way to go would be a repeated iteration over the test set for each of those models, i.e. roughly:
```python
for i, (input, target) in enumerate(test_loader):
    target, input = target.to(device), input.to(device)
    for model in models:
        output = model(input)
        loss = criterion(output, target)
        acc1 = accuracy(output, target)
        # metric management
```
This seems a bit inefficient/slow (2 min per minibatch, 4 minibatches, a thousand models…). Is there a clean way to do this that exploits the embarrassingly parallel nature of the task?
At first I thought of concatenating the models together (in the end, it’s just sets of layers that could operate in parallel?), but I have no clue how to achieve that in PyTorch, or even whether it’s doable. Multiprocessing also came to mind as a possible option, but would it be efficient at all? Any suggestion is welcome.
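To make the “concatenate the models” idea concrete, here is a minimal sketch of what I imagine, based on the `torch.func` ensembling utilities (`stack_module_state`, `functional_call`, `torch.vmap`; assumes PyTorch >= 2.0). Tiny `Linear` models stand in for the WideResNets here, purely for illustration:

```python
import copy

import torch
from torch.func import functional_call, stack_module_state

# Hypothetical stand-ins for the dozen WideResNet 28-10 models.
models = [torch.nn.Linear(4, 2) for _ in range(3)]

# Stack the parameters/buffers of all models along a new leading dim.
params, buffers = stack_module_state(models)

# A "stateless" template model; its own weights are never used.
base_model = copy.deepcopy(models[0]).to("meta")

def fmodel(params, buffers, x):
    # Run the template architecture with one model's params/buffers.
    return functional_call(base_model, (params, buffers), (x,))

x = torch.randn(8, 4)  # one shared minibatch

# vmap over the stacked model dimension; the input batch is shared
# (in_dims=None), so every model sees the same minibatch.
outputs = torch.vmap(fmodel, in_dims=(0, 0, None))(params, buffers, x)
print(outputs.shape)  # torch.Size([3, 8, 2]) -> (num_models, batch, classes)
```

If something like this works for identical architectures, the per-model outputs could then feed the existing loss/accuracy bookkeeping, but I’m unsure how it scales to a thousand WideResNets memory-wise.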