Not Enough Memory 237GB

I am trying to convolve a tensor of size 1*1*101*101*101 with a kernel of size 99*25*25 and get the following error:

RuntimeError: Torch: not enough memory: you tried to allocate 237GB. Buy new RAM

It says the error occurs at the F.conv3d line.

This is my code:

with torch.no_grad():
    test_output = F.conv3d(test_deconv, psf, padding=(49, 12, 12))

I am running PyTorch 0.4.0 on Windows 10 with 16GB of RAM; the Python version is 3.6.3 (Anaconda).


On the CPU, convolutions are performed by lowering them to a matrix multiplication between matrices representing the input and the kernel (for speed reasons).
With the parameters you have, that intermediate matrix alone would require 237GB of storage.
So you cannot do that on this machine.
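As a back-of-the-envelope check (assuming float32 tensors and the usual im2col-style lowering, where one input patch of kernel size is materialized per output position), the unfolded input matrix alone accounts for the 237GB in the error:

```python
# Sizes from the question: input 1x1x101x101x101, kernel 99x25x25,
# padding (49, 12, 12), stride 1.
in_d = in_h = in_w = 101
k_d, k_h, k_w = 99, 25, 25
pad_d, pad_h, pad_w = 49, 12, 12

# Output spatial size for a stride-1 conv3d: in + 2*pad - k + 1
out_d = in_d + 2 * pad_d - k_d + 1   # 101
out_h = in_h + 2 * pad_h - k_h + 1   # 101
out_w = in_w + 2 * pad_w - k_w + 1   # 101

# im2col matrix: one row per output position, one column per kernel element
rows = out_d * out_h * out_w         # 1,030,301 output positions
cols = k_d * k_h * k_w               # 61,875 kernel elements
bytes_needed = rows * cols * 4       # float32 = 4 bytes per element

print(f"{bytes_needed / 2**30:.0f} GiB")  # -> 237 GiB
```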


But the model can be trained on a server with a Tesla K80 GPU that has 10GB of memory. How does that work, then?

Because the CPU and GPU use different algorithms. The GPU also uses different algorithms with and without cuDNN, and with cuDNN benchmarking enabled or not…
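For contrast, a hypothetical direct algorithm that only keeps the input, kernel, and output buffers resident (no unfolded im2col intermediate, closer to what a GPU kernel can do) needs only a few megabytes for these sizes, which is why the same convolution can fit on a 10GB card:

```python
# Same sizes as the question; float32 = 4 bytes per element.
input_elems = 1 * 1 * 101 * 101 * 101   # 1,030,301
kernel_elems = 99 * 25 * 25             # 61,875
output_elems = 101 * 101 * 101          # padding keeps the spatial size at 101^3

total_bytes = (input_elems + kernel_elems + output_elems) * 4
print(f"{total_bytes / 2**20:.1f} MiB")  # roughly 8 MiB, versus 237 GiB for im2col
```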