Out of memory error during evaluation but training works fine!

Awesome, thanks for taking the time to answer my questions.

Thanks for your comment, was helpful for me too! :slight_smile:


It actually works. I trained torchvision's DenseNet-161 on a single GPU. The model trained fine during the training stage but ran out of memory during the evaluation stage, because inference was being performed without 'torch.no_grad()'. After wrapping inference in 'torch.no_grad()', there is no more out-of-memory error. Thanks.
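For anyone hitting the same issue, here is a minimal sketch (not my exact code; it assumes a torchvision DenseNet-161 and uses dummy batches in place of a real DataLoader) showing the evaluation loop wrapped in 'torch.no_grad()':

```python
import torch
import torchvision

# Minimal sketch: evaluate a torchvision DenseNet-161 under torch.no_grad()
# so autograd does not keep the computation graph and activations alive,
# which is what caused the out-of-memory error during evaluation.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.densenet161().to(device)
model.eval()  # also switches BatchNorm/Dropout to eval behaviour

# Dummy evaluation batches stand in for a real DataLoader.
eval_batches = [torch.randn(8, 3, 224, 224) for _ in range(4)]

with torch.no_grad():  # disables gradient tracking -> far less GPU memory
    for images in eval_batches:
        outputs = model(images.to(device))
        # ... compute metrics on `outputs` here ...
```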

You’ve saved my homework and my life. My eyes are full of tears of gratitude. QAQ