Different batchsize for training and validation

Hi, can I set different batch sizes for training and validation? My GPU RAM is not enough to use the same batch size for both.
For example,

  • Training: batch size 128
  • Validation: batch size 1

I think the validation batch size will not affect the validation loss and accuracy, right?


Yes, you can use different batch sizes and the batch size during evaluation (after calling model.eval()) will not affect the validation results.
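For illustration, here is a minimal sketch with placeholder tensor datasets (the shapes and sizes are made up); the two DataLoaders are independent objects, so their batch sizes do not have to match:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy datasets standing in for the real data (hypothetical shapes).
train_dataset = TensorDataset(torch.randn(1024, 3, 224, 224), torch.randint(0, 10, (1024,)))
val_dataset = TensorDataset(torch.randn(256, 3, 224, 224), torch.randint(0, 10, (256,)))

# Each loader gets its own batch size; nothing ties them together.
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=1, shuffle=False)
```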

Are you using larger inputs during validation, or why do you have to reduce the batch size by 128x?


Right now I am using batch size 128 for both training and validation, but the GPU RAM (2080Ti, 11 GB) is full.
By the way, my task combines an image model and a language model for classification. I am not sure whether my model is too large or not.

There are 443,757 questions for training and 214,354 for validation. I think a batch size of 128 is a little small. The training time is almost 2.5 hours per epoch. It really drives me crazy…

You can reduce the memory usage during validation by wrapping the validation loop in a with torch.no_grad() block, which makes sure the intermediate activations are not stored, since they would only be needed to calculate the gradients. If you aren’t using it already, you might be able to increase the batch size during validation further and speed up this loop.
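For reference, a minimal sketch of such a validation loop (the model, loader, criterion, and device names here are placeholders, not your actual code):

```python
import torch

def validate(model, val_loader, criterion, device):
    model.eval()  # disable dropout, use running stats in batchnorm
    total_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():  # no autograd graph is built, so activations are freed right away
        for inputs, targets in val_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item() * targets.size(0)
            correct += (outputs.argmax(dim=1) == targets).sum().item()
            total += targets.size(0)
    return total_loss / total, correct / total
```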


I have already used torch.no_grad() to reduce the memory usage.

Maybe this is relevant: The batch size can affect inference results | OpenReview