Why load test data in mini-batches?

This may be a dumb question. The answer is probably that even test data sets are too large to fit in memory. But I would still like to know why we use a data loader for the test set.

As far as I understand it, the test set is used for calculating metrics and perhaps error analysis. There are no operations that need to be carried out on mini-batches. Can someone please confirm the reason?

Example (MNIST):

    import torch
    from torchvision import datasets, transforms

    # `args` and `kwargs` are defined earlier in the script
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)
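
For context, this is roughly how I end up using it to compute accuracy. It's only a sketch and assumes a trained `model` and a `device` from the same script:

    # Sketch only: assumes a trained `model` and a `device`
    correct = 0
    with torch.no_grad():                    # no gradients needed at test time
        for data, target in test_loader:     # one mini-batch at a time
            data, target = data.to(device), target.to(device)
            output = model(data)             # forward pass
            pred = output.argmax(dim=1)      # predicted class per sample
            correct += (pred == target).sum().item()
    print('Test accuracy:', correct / len(test_loader.dataset))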

I also have a follow-up question to ask…

Memory is the primary reason in many cases for me: at test time you are basically just running a bunch of forward passes through your network, and batching means you never need the whole test set (plus its activations) in memory at once. If you have batchnorm layers in your network, it is also wise to call model.eval() before testing, since that makes them use their running statistics instead of per-batch statistics, so the results no longer depend on the batch size you choose.
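
Something like this, as a minimal sketch (assuming your trained `model`, a `device`, and the `test_loader` from the question):

    model.eval()                 # batchnorm uses running statistics, dropout is disabled
    with torch.no_grad():        # skip gradient bookkeeping for the forward passes
        for data, target in test_loader:
            output = model(data.to(device))
            # ... compute your metrics from `output` and `target` here ...
    model.train()                # switch back before any further training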

see: