CUDA initialization RAM usage

import os, psutil, torch

process = psutil.Process(os.getpid())

# Resident set size (RSS) in MiB before touching CUDA
print(process.memory_info().rss / float(2**20))
# Creating the first CUDA tensor forces CUDA context initialization
cuda_tensor = torch.cuda.FloatTensor([0.0])
# RSS in MiB after CUDA initialization
print(process.memory_info().rss / float(2**20))

The snippet above prints ~90 MB before the tensor is created and ~1280 MB after, while nvidia-smi reports ~281 MB in use on the GPU. Several topics on this forum explain the GPU memory usage as overhead for the various CUDA contexts, but I am puzzled by the very high CPU RAM usage. Is this the expected amount of host RAM needed just to use PyTorch's CUDA module, and why does it use so much?
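As a cross-check that the growth is real process memory and not an artifact of how psutil reports it, RSS can also be read with the standard-library resource module. This is a minimal sketch, not part of the original snippet; note that ru_maxrss reports *peak* (not current) usage, and its unit is kilobytes on Linux but bytes on macOS:

```python
import resource

def peak_rss_mb():
    # ru_maxrss is the peak resident set size of this process;
    # on Linux it is reported in kilobytes (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

print(peak_rss_mb())
```

Printing this before and after the first CUDA tensor is created should show the same large jump, since the peak can only grow.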

I tested this snippet on Ubuntu 14.04/16.04 with PyTorch 0.2.0/0.3.0 and CUDA 8/9.