System reboots when increasing batch size

I tried to train SSD using the PyTorch code from https://github.com/amdegroot/ssd.pytorch, and something strange happened. When I set batch_size to 1, 2, or 4, CPU utilization is nearly 99% (the GPUs are used normally). When batch_size is increased to 8 or more, the system crashes and reboots automatically. So I can only train with batch_size=4, but that gives a lower mAP of 72.74 (compared to the author's 77.43). I think the batch size matters a lot here, but I cannot increase it because of this problem.

So I am really confused: why does training use so much CPU when the GPUs are doing the work, and how can I solve it?
When I train other PyTorch networks, this weird thing doesn't happen (CPU utilization stays around 1%).
Hardware: 8x 1080 Ti GPUs, 56 CPU cores, Ubuntu 16.04.

Could the increased batch size also increase the power draw of your GPUs, which might crash a (too weak or faulty) PSU?
Could you create a dummy model that keeps all GPUs at high utilization?
Did you notice anything else, e.g. your memory filling up?
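Something like this minimal sketch (function name, matrix size, and iteration count are arbitrary) would load all visible GPUs with matmuls while bypassing the data pipeline entirely, so a reboot during this test would point at power delivery or hardware rather than the DataLoader:

```python
import torch

def burn_all_gpus(size=8192, iters=500):
    # One big square matrix per visible GPU.
    mats = [torch.randn(size, size, device='cuda:%d' % i)
            for i in range(torch.cuda.device_count())]
    # Repeated large matmuls keep every GPU busy; results are discarded,
    # we only want the sustained compute (and power) load.
    for _ in range(iters):
        for m in mats:
            torch.mm(m, m)
    torch.cuda.synchronize()

if __name__ == '__main__':
    burn_all_gpus()
```

If the machine survives this but still reboots with batch_size=8 in the real training script, the data loading side becomes the more likely suspect.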

Today I found that when I set num_workers=2 (instead of the default 4), batch_size can be set larger than 4 and the system no longer reboots automatically. But CPU utilization is still very high.
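For reference, a minimal sketch of the workaround, with a dummy dataset standing in for VOC (the real loader in the ssd.pytorch repo also passes its own collate_fn; only num_workers is the relevant change here):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == '__main__':
    # Dummy stand-in for the VOC dataset, just to show the loader knob.
    dataset = TensorDataset(torch.randn(64, 3, 300, 300))

    loader = DataLoader(dataset,
                        batch_size=8,    # usable now without a reboot
                        num_workers=2,   # was 4 in the repo's default
                        shuffle=True,
                        pin_memory=True)

    for (batch,) in loader:
        pass  # training step would go here
```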

I think the problem comes from the DataLoader in PyTorch 0.4.0. How can I solve it?

Why do you think it's a problem of the DataLoader in PyTorch 0.4.0?
Is your code running fine in newer versions?
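If it does turn out to be data loading, one thing worth ruling out is thread oversubscription inside the worker processes rather than the DataLoader itself. A sketch of common mitigations, assuming the augmentation pipeline is OpenCV-based (as in ssd.pytorch):

```python
import cv2

# Each DataLoader worker process can additionally spin up OpenCV's own
# internal thread pool, oversubscribing the 56 cores. Disabling it is a
# common mitigation for runaway CPU utilization during augmentation.
cv2.setNumThreads(0)

# Likewise, capping the OpenMP/MKL pools used by CPU-side tensor ops can
# help; these are easiest to set in the shell before launching training:
#   OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 python train.py
```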