CPU memory allocation when using a GPU

Hi,
I have a question regarding the allocation of RAM/virtual memory (not GPU memory) when torch.cuda.init() is called.

If I use the code

import torch
torch.cuda.init() 

virtual memory usage goes up to about 10 GB, and RAM usage to about 135 MB (from almost nothing beforehand).

If I then run

torch.rand((256, 256)).cuda()

virtual memory usage increases to 15.5 GB, and RAM usage to 2 GB.
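For reference, here is roughly how these numbers can be measured from within the process. This is just a sketch, assuming psutil is installed and at least one CUDA device is visible:

import os

import psutil
import torch

proc = psutil.Process(os.getpid())

def report(label):
    # rss = resident set size (RAM); vms = virtual memory size
    mem = proc.memory_info()
    print(f"{label}: RSS={mem.rss / 2**30:.2f} GB, VMS={mem.vms / 2**30:.2f} GB")

report("after import")
torch.cuda.init()              # creates the CUDA context
report("after torch.cuda.init()")
torch.rand((256, 256)).cuda()  # first tensor moved to the GPU
report("after first .cuda()")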

What is the reason behind this?

Is there any way to prevent it?

If needed:
torch version: 1.0.0
CUDA: 8.0
Python: 3.6.5

Hi,

That looks like this GitHub issue, no?

Yes, this seems to be the same issue!

After doing more testing, it seems that the amount of memory allocated after calling torch.cuda.init() depends on the number of visible GPUs (with 1x P100, about 10 GB is allocated; with 2x P100, about 18 GB…).
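For anyone reproducing this: the number of GPUs a process sees can be limited with the standard CUDA_VISIBLE_DEVICES environment variable, as long as it is set before the CUDA context is created. A minimal sketch (the "0" below is just an example device index):

import os

# Must be set before the CUDA context is initialized,
# so safest is before importing torch at all.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only the first GPU

import torch

torch.cuda.init()
print(torch.cuda.device_count())  # prints 1, even on a multi-GPU machine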

Did you find any workarounds for this issue?

I’m afraid not :confused:
The fact that it uses a lot of virtual memory is expected, because CUDA uses it for GPU memory management, from what I remember. And you should have plenty of virtual memory to spare anyway :smiley:
The fact that it uses 2 GB of RAM is a bit surprising, though, and I’m not sure what the root cause is.