Can't use multiple CPUs when I use the GPU

I use PyTorch to train, but I can't use my computer's full performance.
My computer configuration is:


I am using PyTorch to test a neural network. My input and output image size is 688×688×1.
When I train on the CPU, it seems to use multiple CPUs, and the batch size can be set up to 128.

But when I train on the GPU, the batch size can only be set to 4. If I use a larger batch size, I get an 'out of memory' error. It seems only one CPU is used.

I use this code to use CUDA:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = x.to(device)
labels = y.to(device)
model = Unet(1, 1).to(device)

What should I do to use all CPUs when I use CUDA? Do I need some extra setup?

The GPU raises an error when its memory is full, unlike the CPU, whose memory management is handled by the operating system (swapping to disk, for example). There is not much you can do about your batch_size. What you can do is increase your number of workers, which would use more CPUs.
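A minimal sketch of what "increase your number of workers" means in practice: pass `num_workers` to a `DataLoader`, so several CPU processes prepare batches in parallel while the GPU trains. The tensor shapes and hyperparameters below are placeholders, not taken from your setup:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Dummy data standing in for your images (shapes here are assumptions).
x = torch.randn(16, 1, 64, 64)
y = torch.randn(16, 1, 64, 64)

loader = DataLoader(
    TensorDataset(x, y),
    batch_size=4,
    shuffle=True,
    num_workers=4,    # CPU worker processes loading batches in parallel
    pin_memory=True,  # can speed up host-to-GPU transfers
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for inputs, labels in loader:
    inputs = inputs.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward / backward pass here ...
```

Note this only parallelizes data loading; it does not reduce GPU memory use, so it won't let you raise the batch size past what the GPU can hold.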


Thank you for your reply. It seems the GPU is not doing its best. I still don't understand what I should do. Should I upgrade my CPU RAM?

Have a look here:
https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/