@smth
So, it threw this error at the end of training the first epoch, just as it was about to begin testing for accuracy.
The batch_size was 20 for both the trainloader and the testloader, which is already fairly low, so I am confused. Should I reduce it further to, say, 10?
I am doing two-class classification with resnet34, and suddenly the same error, “out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58”, is occurring. Previously the code ran with any batch size. I reduced the batch size to 2, but the error remains. Does anyone know how to rectify this error?
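Edit: for context, my test loop is roughly the shape below (a minimal sketch; the tiny `nn.Linear` and random tensors are placeholders standing in for resnet34 and my actual dataset). One thing I notice is that I am not wrapping evaluation in `torch.no_grad()` (or `volatile=True` on older PyTorch versions), so autograd may be keeping activations alive on the GPU. Could that be the cause?

```python
import torch
import torch.nn as nn

# Placeholder model and data: a tiny linear classifier instead of resnet34,
# and random batches of size 20 instead of my real testloader.
model = nn.Linear(10, 2)
testloader = [(torch.randn(20, 10), torch.randint(0, 2, (20,)))
              for _ in range(3)]

correct = 0
total = 0
model.eval()
with torch.no_grad():  # no autograd graph is built, so activations are freed right away
    for inputs, labels in testloader:
        outputs = model(inputs)                     # shape (20, 2)
        preds = outputs.argmax(dim=1)               # predicted class per sample
        correct += (preds == labels).sum().item()   # .item() detaches from any graph
        total += labels.size(0)

accuracy = correct / total
```

Without the `torch.no_grad()` block, is it expected that the forward passes at test time allocate extra memory for gradients even though I never call `backward()`?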
Thanks!