torch.load() causes CPU memory to increase during model eval

Hi all, I've noticed that CPU memory increases sharply when I use torch.load() for model eval. I'm running on a GPU, so map_location was set to "cuda:0". I wonder why so much CPU memory is used. Thanks!

Try doing this:

# Load all tensors onto GPU 1
torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
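A minimal, self-contained sketch of how map_location works, assuming a checkpoint saved to a temporary file rather than the original poster's 'tensors.pt'. The CPU-only load always works; the GPU branch is guarded so the snippet also runs on machines without CUDA (and cuda(0) is used here instead of cuda(1), which would require a second GPU):

```python
import os
import tempfile

import torch

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "tensors.pt")  # hypothetical checkpoint path
    t = torch.arange(6, dtype=torch.float32)
    torch.save(t, path)

    # map_location="cpu" keeps all storages in CPU memory.
    loaded = torch.load(path, map_location="cpu")
    print(torch.equal(t, loaded))

    # A callable map_location receives (storage, location) and returns
    # the storage moved to the desired device; guarded for portability.
    if torch.cuda.is_available():
        on_gpu = torch.load(path, map_location=lambda storage, loc: storage.cuda(0))
        print(on_gpu.device)
```

Note that even with a GPU map_location, torch.load still deserializes the checkpoint through CPU memory first, which is consistent with the CPU usage spike described above.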