Memory is sufficient while I train the net and save the model, but when I load the trained model and run the test, the code raises 'out of memory'.
Train: batch_size = 8. Test: batch_size = 1.
I already call model.eval() and wrap the test loop in 'with torch.no_grad():'.
In addition, I use 2 GPUs for both training and testing in the code, but during testing memory is only allocated on one of the GPUs.
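One common cause of this symptom is that torch.load restores the checkpoint onto the device it was saved from, which can pile everything onto GPU 0 at test time. A minimal sketch of a safer load-and-test path, assuming a hypothetical small Net standing in for your actual network (names and shapes are placeholders, not your code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real network.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = Net()

# Save the state_dict, as is typically done during training.
torch.save(model.state_dict(), "checkpoint.pth")

# Load onto CPU first: without map_location, torch.load puts the tensors
# back on the device they were saved from, which can exhaust GPU 0.
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()  # switch dropout/batchnorm to inference behavior

with torch.no_grad():  # do not build the autograd graph at test time
    x = torch.randn(1, 10, device=device)  # test batch_size = 1
    out = model(x)

print(out.shape)  # torch.Size([1, 2])
```

Note that if the checkpoint was saved from an nn.DataParallel-wrapped model, the keys carry a "module." prefix, so you either load it into a DataParallel-wrapped copy or strip the prefix first. Also, DataParallel splits the input batch across GPUs, so with batch_size = 1 at test time only one GPU receives work, which would explain why only one GPU's memory is used.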