Job crashed in PyTorch 1.1.0 because of an OOM problem

Hi all,
When I set the DataLoader's num_workers > 0, I get this error:


It looks like an out-of-CPU-memory error, but when I check memory usage, the process only uses about 5% of physical memory (though a lot of virtual memory).
When I watch memory with the free command, the free column keeps decreasing while the buff/cache column keeps increasing, and eventually the job crashes.
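For context, the loader is created roughly like this (a minimal sketch; the dataset class, batch size, and worker count are placeholders, not my exact values):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Placeholder dataset standing in for my real one
class MyDataset(Dataset):
    def __len__(self):
        return 100000

    def __getitem__(self, idx):
        # my real __getitem__ loads and preprocesses a sample from disk
        return torch.randn(3, 224, 224), 0

loader = DataLoader(
    MyDataset(),
    batch_size=64,    # placeholder value
    shuffle=True,
    num_workers=8,    # the crash only happens when this is > 0
)

for epoch in range(10):
    for batch, labels in loader:
        pass  # training step omitted
```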

I don't see this problem with PyTorch 1.0.0. How can I fix it? And by the way, why does it use so much virtual memory?
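In case it helps, this is roughly how I log memory while the job runs, mirroring what I see with free (a sketch only; it assumes psutil is installed and runs on Linux):

```python
import time
import psutil  # assumption: psutil is available; normally I just watch `free`

proc = psutil.Process()  # attach to the training process if run separately

while True:
    vm = psutil.virtual_memory()
    mem = proc.memory_info()
    # free keeps shrinking, buff/cache keeps growing, rss stays small
    print(
        f"free={vm.free / 2**30:.1f}GiB "
        f"buff/cache={(vm.buffers + vm.cached) / 2**30:.1f}GiB "
        f"rss={mem.rss / 2**30:.1f}GiB "
        f"vms={mem.vms / 2**30:.1f}GiB"
    )
    time.sleep(10)
```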