RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 561297468575040 bytes. Error code 12 (Cannot allocate memory)

Hi,
I am trying to load a tensor using torch.load(), but I get a memory allocation error. It seems odd that such a massive allocation would be requested for a tensor of size 500.
Does anyone have an idea what the reason might be?
Thank you!

to save the mean tensor:

import os
import torch

save_dir = "Statistics"
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
file_path = os.path.join(save_dir, 'tensor-{:04d}.pt'.format(cpt))
torch.save(mean, file_path)

to load the tensor:

file_path = os.path.join(save_dir, 'tensor-0013.pt')
t = torch.load(file_path)

Could you post the dtype and shape of mean as well as your PyTorch version, please?

Thank you for your answer.
Here is the info:

dtype of the mean var: torch.FloatTensor
shape of the mean var: torch.Size([100])
MEAN TENSOR  :    tensor([ 0.0010, -0.0992,  0.0625,  0.0894,  0.0841, -0.0421, -0.1011, -0.0534,
        -0.0956, -0.0805,  0.0222, -0.0536, -0.1080,  0.1050,  0.0494,  0.0396,
         0.0343,  0.0795, -0.0132, -0.0265, -0.0385, -0.1045,  0.0347, -0.0852,
         0.0610,  0.0986, -0.0253, -0.0696,  0.0620,  0.0718, -0.0746, -0.0618,
         0.0109,  0.0349,  0.0634,  0.0511, -0.0147,  0.0587, -0.0750, -0.0771,
         0.0225, -0.0304,  0.0437, -0.0318,  0.0298,  0.0150, -0.0433, -0.0412,
        -0.0048, -0.0955, -0.0041, -0.0831,  0.0162,  0.0312,  0.0048,  0.0056,
         0.1025, -0.0540,  0.0994, -0.0289, -0.1145, -0.0625,  0.0519, -0.0754,
        -0.0544, -0.0007,  0.0830,  0.0534,  0.0045,  0.0124,  0.0085, -0.0708,
        -0.0196, -0.0293,  0.0667,  0.0851, -0.0572, -0.0519,  0.0171,  0.0516,
         0.0527,  0.0306,  0.0638,  0.0480, -0.0601, -0.0722,  0.0703, -0.0403,
         0.0742,  0.1013, -0.0990, -0.0515,  0.0568, -0.0128,  0.0558, -0.1101,
         0.0234,  0.0263,  0.0511,  0.0509], grad_fn=<DivBackward0>)

and I am using PyTorch 1.4.0

Are you able to reproduce the error with this code snippet?

x = torch.tensor([ 0.0010, -0.0992,  0.0625,  0.0894,  0.0841, -0.0421, -0.1011, -0.0534,
        -0.0956, -0.0805,  0.0222, -0.0536, -0.1080,  0.1050,  0.0494,  0.0396,
         0.0343,  0.0795, -0.0132, -0.0265, -0.0385, -0.1045,  0.0347, -0.0852,
         0.0610,  0.0986, -0.0253, -0.0696,  0.0620,  0.0718, -0.0746, -0.0618,
         0.0109,  0.0349,  0.0634,  0.0511, -0.0147,  0.0587, -0.0750, -0.0771,
         0.0225, -0.0304,  0.0437, -0.0318,  0.0298,  0.0150, -0.0433, -0.0412,
        -0.0048, -0.0955, -0.0041, -0.0831,  0.0162,  0.0312,  0.0048,  0.0056,
         0.1025, -0.0540,  0.0994, -0.0289, -0.1145, -0.0625,  0.0519, -0.0754,
        -0.0544, -0.0007,  0.0830,  0.0534,  0.0045,  0.0124,  0.0085, -0.0708,
        -0.0196, -0.0293,  0.0667,  0.0851, -0.0572, -0.0519,  0.0171,  0.0516,
         0.0527,  0.0306,  0.0638,  0.0480, -0.0601, -0.0722,  0.0703, -0.0403,
         0.0742,  0.1013, -0.0990, -0.0515,  0.0568, -0.0128,  0.0558, -0.1101,
         0.0234,  0.0263,  0.0511,  0.0509])

torch.save(x, 'tmp.pt')
y = torch.load('tmp.pt')

If not, is this error deterministically raised at a specific code part?
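If it is, one quick sanity check (an illustrative sketch, not a confirmed diagnosis; the `Statistics` directory name is taken from your save snippet) would be to compare the on-disk sizes of the saved files. A 100-element float32 tensor should serialize to roughly a kilobyte, so a file that is truncated or wildly larger would point at a corrupted write:

```python
import os
import torch

save_dir = "Statistics"  # same directory used in the save snippet above

# (For a self-contained demo, write one example file if the directory is empty.)
os.makedirs(save_dir, exist_ok=True)
if not os.listdir(save_dir):
    torch.save(torch.zeros(100), os.path.join(save_dir, 'tensor-0000.pt'))

# List each saved file with its size; a 100-float tensor should be
# on the order of a kilobyte, so outliers are suspects.
for name in sorted(os.listdir(save_dir)):
    path = os.path.join(save_dir, name)
    print(name, os.path.getsize(path), "bytes")
```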

This code snippet works fine and I am able to load the tensor.
What I do is compute BatchNorm statistics during training and store them using torch.save(), and I get no error during that phase. But when I come to load them (I forgot to mention this), some tensors load and some do not, and it does not depend on the size: sometimes a tensor of size 500 loads while another of size 100 does not.
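One thing worth checking (a hedged sketch, not a confirmed fix): the printed tensor carries grad_fn=&lt;DivBackward0&gt;, so it is still attached to the autograd graph when saved. Detaching (and cloning) before torch.save() stores only the values, not the graph history; the `mean` below is a stand-in for the BatchNorm statistic computed during training:

```python
import torch

# Stand-in for the statistic computed during training: dividing a tensor
# that requires grad produces a non-leaf tensor with grad_fn=<DivBackward0>.
mean = torch.randn(100, requires_grad=True) / 2

# Detach and clone so only the plain values are serialized,
# without any attached autograd history.
torch.save(mean.detach().clone(), 'tmp_stats.pt')

restored = torch.load('tmp_stats.pt')
print(restored.shape)     # torch.Size([100])
print(restored.grad_fn)   # None
```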