Why does eval take more memory?

Memory is sufficient while I'm training the net, and I save the model. When I load the trained model and run the test, the code raises an 'out of memory' error.
Training: batch_size=8. Testing: batch_size=1.
I used `eval()` and `with torch.no_grad():`.
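
For reference, the testing part looks roughly like this (the net, checkpoint path, and data below are simplified placeholders for my actual code):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder net and data standing in for my actual model and test set.
model = nn.Linear(10, 2).cuda()
model.load_state_dict(torch.load('checkpoint.pth'))  # placeholder path
model.eval()  # switch dropout/batchnorm layers to eval behaviour

test_loader = DataLoader(TensorDataset(torch.randn(4, 10)), batch_size=1)

with torch.no_grad():  # no autograd bookkeeping, so activations are freed
    for (data,) in test_loader:
        output = model(data.cuda())
```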

In addition, I used 2 GPUs for both training and testing in the code, but only one GPU's memory was used during testing.
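
A minimal sketch of the multi-GPU setup I mean, assuming `nn.DataParallel` (the net is a placeholder); since DataParallel splits each input batch across the devices, a test batch of size 1 can only land on one GPU, which may be why only one GPU's memory shows usage:

```python
import torch.nn as nn

net = nn.Linear(10, 2)  # placeholder for my actual network
# nn.DataParallel replicates the module and splits each input batch across
# device_ids; a batch of size 1 cannot be split, so only one GPU does work
# (and allocates memory) during testing.
net = nn.DataParallel(net, device_ids=[0, 1]).cuda()
```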

Does your validation loop raise the OOM directly in the first iteration, or only after a few iterations?
Could you post a (small) code snippet that shows how you are running the training and validation loops?

I have solved it. Thanks anyway!!