CPU memory usage greatly increases when moving a model from CPU to GPU

Hi,

I have trained a Scaled-YOLOv4 object detection model in darknet, which I have converted to a PyTorch model via GitHub - Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4. When I load the PyTorch model onto my CPU, I see only a small increase in CPU memory usage (less than 0.2GB, which is the size of the darknet .weights file). However, when I run `model.to('cuda:0')`, CPU memory usage increases by 2.5GB, which is strange because (1) shouldn't it be GPU memory that increases, and (2) why is the increase so much bigger than 0.2GB?
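For reference, here is how I compare against the 0.2GB figure: the in-memory size of the weights can be computed directly from the model's tensors (a minimal sketch; `model` stands for the loaded pytorch-YOLOv4 model):

```python
import torch

def param_bytes(model: torch.nn.Module) -> int:
    # Total storage of learnable parameters plus registered buffers
    # (e.g. BatchNorm running stats), in bytes.
    params = sum(p.numel() * p.element_size() for p in model.parameters())
    buffers = sum(b.numel() * b.element_size() for b in model.buffers())
    return params + buffers

# For a ~0.2GB .weights file this should come out around 0.2:
# print(param_bytes(model) / 1024**3)
```

This only counts tensor storage, so any extra CPU memory beyond this number is coming from somewhere other than the weights themselves.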

The command I use to obtain the memory usage inside my docker container is `os.system('cat /sys/fs/cgroup/memory/memory.usage_in_bytes')`, which I assume returns the CPU memory usage.
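Since `os.system()` only prints the value and returns the shell's exit status, I read the file directly to get the usage as an integer (a minimal sketch; the path assumes cgroup v1 as in my container — under cgroup v2 the equivalent file is `/sys/fs/cgroup/memory.current`):

```python
def cgroup_mem_bytes(path="/sys/fs/cgroup/memory/memory.usage_in_bytes"):
    # Current memory usage of this cgroup, in bytes.
    # Reading the file directly yields a Python int we can diff,
    # unlike os.system(), which prints the value and returns 0.
    with open(path) as f:
        return int(f.read().strip())

# before = cgroup_mem_bytes()
# model.to('cuda:0')
# print((cgroup_mem_bytes() - before) / 1024**3, "GB increase")
```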

I've looked at Why does moving a model to the GPU increase the CPU memory usage? but it was pretty handwavy, and I was wondering if there were other explanations for this phenomenon. I do see a lot of CUDA-related libraries being imported; could it be due to that?