I was wondering whether it's possible to parallelize this model evaluation, which currently runs inside a for loop, on a GPU.
Should I be doing this in PyTorch, or with the multiprocessing module in Python? Here's an example of one of the things I want to do:
```python
import torch
from tqdm import tqdm

model.eval()
with torch.no_grad():
    for param in model.parameters():
        flat = param.view(-1)              # flat view so individual weights can be indexed
        for idx in range(flat.numel()):
            original = flat[idx].item()
            for value in range(3):         # sweep this single weight over 0, 1, 2
                flat[idx] = value          # overwrite the weight in place
                correct = 0
                for batch, label in tqdm(evalloader):
                    batch = batch.to(device)
                    label = label.to(device)
                    pred = model(batch)
                    correct += (torch.argmax(pred, dim=1) == label).sum().item()
                accuracy = correct / len(evalloader.dataset)
                # ... record accuracy for this (param, idx, value) here ...
            flat[idx] = original           # restore the weight before moving on
```
In the above code I'm basically trying to map out what the accuracy landscape looks like with a grid search: for every single weight/bias of the network, I manually set it to 0, 1, and 2 and evaluate the model.
How can I use the GPU to evaluate the model in parallel a bunch of times?
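From what I've read, one direction might be the model-ensembling pattern in `torch.func` (PyTorch 2.x): stack several copies of the model, each with one weight overwritten, and run them on the same batch in a single vmapped forward pass. This is only a sketch of what I have in mind (the `variants`/`forward_all` names are mine, it reuses `model`, `evalloader` and `device` from above, and it assumes the copies fit in GPU memory). Is this the right approach, or is there something better?

```python
import copy
import torch
from torch.func import stack_module_state, functional_call

# One copy of the model per candidate value of the weight being swept.
variants = [copy.deepcopy(model) for _ in range(3)]
with torch.no_grad():
    for value, m in enumerate(variants):
        # Just as an illustration: overwrite the first weight of the first
        # parameter with 0, 1, 2 in the respective copy.
        next(m.parameters()).view(-1)[0] = float(value)

# Stack the parameters/buffers of all variants along a new leading dimension.
params, buffers = stack_module_state(variants)

# A "stateless" skeleton of the model, used only for its structure.
base = copy.deepcopy(model).to("meta")

def forward_one(p, b, x):
    return functional_call(base, (p, b), (x,))

# vmap over the stacked parameter dimension; the batch is shared (in_dims=None).
forward_all = torch.vmap(forward_one, in_dims=(0, 0, None))

correct = torch.zeros(len(variants), device=device)
with torch.no_grad():
    for batch, label in evalloader:
        batch, label = batch.to(device), label.to(device)
        preds = forward_all(params, buffers, batch)            # (variants, batch, classes)
        correct += (preds.argmax(dim=-1) == label).sum(dim=1)
accuracy = correct / len(evalloader.dataset)                   # one accuracy per variant
```

My worry is that with one copy per candidate value this only parallelizes over the 3 values of a single weight, not over all the weights at once.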
PS: More generally, why isn't model evaluation done in parallel even in simple cases? A test dataset can be quite large.
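To clarify what I mean: as far as I understand, a single forward pass already evaluates the whole batch in parallel on the GPU, so the only knob I see on my side is a bigger eval batch and background data loading, roughly like this (reusing the same dataset as my `evalloader` above):

```python
from torch.utils.data import DataLoader

# Larger eval batches: the GPU parallelizes within a batch, but the batches
# themselves are still consumed one after another by the for loop.
evalloader = DataLoader(
    evalloader.dataset,
    batch_size=1024,     # as large as GPU memory allows
    shuffle=False,
    num_workers=4,       # prepare the next batches on the CPU in the background
    pin_memory=True,     # faster host-to-device copies
)
```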