What is the disadvantage of using pin_memory?

From the official PyTorch documentation (http://pytorch.org/docs/notes/cuda.html#use-pinned-memory-buffers), it seems that by pinning your batch in CPU memory, the data transfer to the GPU can be much faster.

Then comes the question: why is pin_memory False by default in DataLoader? I tried to recall the little I learned in operating systems classes. Does pinning memory mean that once a batch is pinned, it will always stay in memory until the process ends?
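For reference, here is a minimal sketch of the pattern the documentation describes, with batches pinned by the DataLoader and then copied to the GPU with non_blocking=True. The dataset, shapes, and batch size below are made up for illustration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset; shapes and sizes are arbitrary.
dataset = TensorDataset(torch.randn(1000, 3, 32, 32),
                        torch.randint(0, 10, (1000,)))

# pin_memory=True makes the DataLoader return batches in page-locked host memory.
loader = DataLoader(dataset, batch_size=64, pin_memory=True, num_workers=2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for images, labels in loader:
    # non_blocking=True lets the host-to-device copy overlap with computation;
    # this asynchronous copy is only possible because the source batch is pinned.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward / backward pass would go here ...
```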

Pinned memory is page-locked memory. It is easy for users to shoot themselves in the foot if they enable page-locked memory for everything, because it cannot be paged out by the OS. That is why we did not make it True by default.
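To illustrate what "page-locked" means in practice, here is a small sketch (assuming a CUDA-capable build, since pinning requires one): a tensor copied into pinned memory reports is_pinned() as True, and the OS keeps those pages resident in RAM until the tensor is freed.

```python
import torch

# An ordinary (pageable) CPU tensor.
x = torch.randn(1024, 1024)
print(x.is_pinned())  # False

# pin_memory() copies the tensor into page-locked (pinned) host memory.
# The OS can no longer swap these pages out, so that physical RAM stays
# reserved for as long as the tensor is alive.
x_pinned = x.pin_memory()
print(x_pinned.is_pinned())  # True
```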

Could you suggest scenarios in which we should use pin_memory?

Try using pin_memory, but if you see the system freezing or swap being used heavily, disable it.
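One way to follow this advice (my own sketch, not from the thread) is to make the flag easy to toggle, for example defaulting it to whether a GPU is present at all:

```python
import torch
from torch.utils.data import DataLoader

def make_loader(dataset, batch_size=64, pin_memory=None):
    # Hypothetical helper: pin only when a GPU is available by default,
    # and let callers pass pin_memory=False if the host starts swapping.
    if pin_memory is None:
        pin_memory = torch.cuda.is_available()
    return DataLoader(dataset, batch_size=batch_size, pin_memory=pin_memory)
```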

Can pin_memory cause out-of-memory errors for GPUs?

I am seeing this error when using pin_memory=True in the DataLoader.

Yes, some problems may be caused by pin_memory.