What is the disadvantage of using pin_memory?

(Yaozong Gao) #1

From the official PyTorch documentation (http://pytorch.org/docs/notes/cuda.html#use-pinned-memory-buffers), it seems that by pinning your batch in CPU memory, the data transfer to the GPU can be much faster.

Then comes the question: why is pin_memory False by default in DataLoader? Trying to recall the little I learned in operating systems classes: does pinning memory mean that once a batch is pinned, it stays resident in memory until the process ends?


Pinned memory is page-locked memory. It is easy for users to shoot themselves in the foot if they enable page-locked memory for everything, because it can't be swapped out by the OS. That is why we did not make it True by default.
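A minimal sketch of how pin_memory is typically combined with an asynchronous host-to-device copy; the toy dataset and variable names here are illustrative, not from this thread:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 64 samples of 3 features with binary labels (illustrative only)
dataset = TensorDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))

# pin_memory=True makes the DataLoader copy each batch into page-locked
# (pinned) host memory, which speeds up the host-to-GPU transfer and lets
# it run asynchronously. Guarded here so the sketch also runs without CUDA.
loader = DataLoader(dataset, batch_size=16,
                    pin_memory=torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"
for x, y in loader:
    # non_blocking=True only overlaps the copy with compute when the
    # source tensor actually lives in pinned memory
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ... forward/backward pass would go here
```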

(Yaozong Gao) #3

Could you suggest scenarios when we should use pin_memory?


Try to use pin_memory=True, but if you see the system freeze or swap being used a lot, disable it.
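Outside the DataLoader, you can also pin an individual tensor by hand; a small sketch (guarded, since pinning requires a CUDA-enabled build):

```python
import torch

# A regular host tensor lives in pageable memory
t = torch.randn(1024, 1024)
print(t.is_pinned())  # False

if torch.cuda.is_available():
    # pin_memory() returns a copy of the tensor in page-locked host memory;
    # the OS can no longer swap these pages out, so pin only the buffers
    # you actually stream to the GPU
    p = t.pin_memory()
    print(p.is_pinned())  # True
```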


Can pinned memory cause out-of-memory errors on the GPU?

(jdhao) #8

I am seeing this error when using pin_memory=True in the DataLoader.

(Jacky) #9

Yeah, some problems may be caused by pin_memory.