I specified map_location as cuda:0 and expected GPU memory usage to go up accordingly. Instead I see host (CPU) memory usage grow by about 1.5 GB while GPU memory usage grows by only about 550 MB.
# record memory usage here as a baseline
# the model file on disk is about 15 MB
self.device = torch.device('cuda:0')
self.model = torch.load('weights/my_model.pt', map_location=self.device)['model'].float().eval()
# check the increase in memory usage here
Why does torch.load consume so much host memory? Is there something wrong with my code?
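For what it's worth, most of that host-memory growth is probably not the checkpoint itself: the first time a process touches the GPU, the CUDA runtime initializes a context and loads its kernel libraries, which typically costs several hundred MB to over a GB of host RAM regardless of model size. A minimal sketch to separate the two effects (the checkpoint path and module here are hypothetical stand-ins for your 15 MB weights file):

```python
import os
import tempfile

import torch

# Hypothetical small checkpoint standing in for 'weights/my_model.pt'.
ckpt_path = os.path.join(tempfile.gettempdir(), "demo_ckpt.pt")
torch.save({'model': torch.nn.Linear(64, 64)}, ckpt_path)

# Pick the GPU if one is present; otherwise fall back to CPU so the
# snippet still runs. Note that merely creating the cuda device object
# is cheap; the big host-memory jump happens when the CUDA context is
# initialized on first GPU use (e.g. the first tensor copied to it).
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')

# map_location moves the deserialized tensors to `device`, but the
# pickled byte stream is still read through host memory first, so a
# transient CPU-side spike roughly the size of the weights is expected.
model = torch.load(ckpt_path, map_location=device)['model'].float().eval()
print(type(model).__name__)
```

Measuring resident memory immediately before and after the `torch.load` call (e.g. with `resource.getrusage` on Linux) should show that the checkpoint-sized part of the increase is small; the remainder is the one-time CUDA context cost.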