Batch size > 1 for adversarial examples

The tutorial notebook on Adversarial Training uses a batch size of 1. What code changes are needed if we want to train with a batch size of, say, 16? My understanding is that we only need to change the prediction logic to:
final_pred = output.max(1, keepdim=True)[1]  # index of the max log-probability, shape [B, 1]
# Now we have batch size > 1, so drop the keepdim axis before comparing
final_pred = final_pred.squeeze(1)           # shape [B], matches target
indexes = final_pred == target               # boolean mask of correct predictions per sample
correct += torch.sum(indexes).item()

Is there anything else needed? With this change, I get values that are very close to those from batch_size=1, although not identical. Any help would be appreciated.
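
For context, here is a minimal sketch of the full batched test loop I have in mind, based on the tutorial's structure. It assumes model, device, test_loader, and epsilon are set up as in the tutorial, and that fgsm_attack is the tutorial's helper (perturb by epsilon * sign of the input gradient, then clamp to [0, 1]):

import torch
import torch.nn.functional as F

def test_batched(model, device, test_loader, epsilon):
    # Accuracy of the model on FGSM-perturbed inputs, one batch at a time.
    correct = 0
    for data, target in test_loader:
        data, target = data.to(device), target.to(device)
        data.requires_grad = True  # needed so we can read data.grad after backward()

        output = model(data)               # log-probabilities, as in the tutorial
        loss = F.nll_loss(output, target)

        model.zero_grad()
        loss.backward()

        # fgsm_attack is assumed to be defined as in the tutorial
        perturbed_data = fgsm_attack(data, epsilon, data.grad.data)

        output = model(perturbed_data)
        final_pred = output.max(1, keepdim=True)[1].squeeze(1)  # shape [B]
        correct += (final_pred == target).sum().item()

        # NOTE: the tutorial's per-sample check that skips inputs the model
        # already misclassifies (init_pred != target -> continue) is not
        # replicated here.

    return correct / float(len(test_loader.dataset))

One thing I notice is that the tutorial's skip of initially misclassified samples has no direct equivalent in this batched version, which may be related to the small discrepancy in the numbers.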