Pytorch RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0


I also ran into this problem. Have you solved it?

Not really, but I realized that I was apparently using the CPU rather than the GPU. I managed to send the data to the GPU with Tensor.to('cuda').
I hope that helps you.
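
A minimal sketch of that suggestion; the tensor and model below are placeholders, not from the original code:

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    x = torch.randn(1024, 128)                    # example input that starts on the CPU
    x = x.to(device)                              # move the data to the GPU if one is available

    model = torch.nn.Linear(128, 10).to(device)   # the model must live on the same device
    out = model(x)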

It seems like I just don't have enough memory, because the error happens when I append a very large number of tensors into one big Tensor.

What is the dimensionality of your data?

It seems to be very large. I have 600,000 samples to map to low-dimensional vectors (d=5650), so I need a tensor of shape [600000 x 5650] to hold them :rofl: . Maybe PyTorch can't load that much,
at least not as a CPU tensor.
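
A quick back-of-the-envelope check (assuming 32-bit floats) shows why that dense tensor is so heavy:

    # rough memory estimate for a dense [600000 x 5650] float32 tensor
    n_bytes = 600_000 * 5_650 * 4     # elements * 4 bytes per float32
    print(n_bytes / 2**30)            # ~12.6 GiB of RAM for a single tensor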

Is your data sparse? If so, maybe you could use the csr_matrix object from SciPy (scipy.sparse).
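
A minimal sketch of that idea, assuming SciPy is installed; the array here is synthetic and only for illustration:

    import numpy as np
    import torch
    from scipy.sparse import csr_matrix

    dense = np.random.rand(1000, 5650)
    dense[dense < 0.99] = 0.0                 # make it mostly zeros for the example

    sparse = csr_matrix(dense)                # stores only the non-zero entries

    # if a PyTorch tensor is needed later, a sparse COO tensor avoids the dense allocation
    coo = sparse.tocoo()
    indices = torch.tensor(np.vstack((coo.row, coo.col)), dtype=torch.long)
    values = torch.tensor(coo.data, dtype=torch.float32)
    t = torch.sparse_coo_tensor(indices, values, size=coo.shape)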

Oh, thanks, I'll try it. For now, I have moved all my data to NumPy.

http://man7.org/linux/man-pages/man3/posix_memalign.3.html

This error is likely occurring during a data transfer from the GPU to the CPU, where your data is of size m and your available memory is of size n with n < m. As the documentation states, posix_memalign can fail with two types of errors:

   EINVAL The alignment argument was not a power of two, or was not a
          multiple of sizeof(void *).

   ENOMEM There was insufficient memory to fulfill the allocation
          request.

Considering that my error occurred near the end of training and that I am working with large datasets, in my case it was most likely an ENOMEM error.
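
One way to catch this before the crash is to compare the size of the tensor with the RAM that is still free before calling .cpu(). A minimal sketch, assuming the third-party psutil package is available (the helper name safe_to_cpu is made up for this example, not a PyTorch API):

    import psutil
    import torch

    def safe_to_cpu(t: torch.Tensor) -> torch.Tensor:
        needed = t.element_size() * t.nelement()       # bytes the CPU copy will need
        free = psutil.virtual_memory().available       # bytes of RAM currently available
        if needed > free:
            raise RuntimeError(f"need {needed} bytes on the CPU but only {free} are free")
        return t.cpu()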