It seems that torch.save does not save the whole tensor if the tensor takes more than 10 GB of storage.

What should I do to deal with this problem?

Thank you very much in advance!

I cannot reproduce the issue in a recent master build and am able to save a tensor of e.g. 16GB using:

>>> import torch
>>> x = torch.randn(int(4*1024**3), device='cuda')
>>> print(torch.cuda.memory_allocated()/1024**3)
>>> torch.save(x, '')
>>> y = torch.load('')
>>> print(y.shape)
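For anyone who wants to verify the round trip at a smaller scale first, a minimal sketch looks like this (the filename `tmp.pt` is a placeholder, since the original path was not shown; the tensor here is small so it runs anywhere):

```python
import torch

# Create a small tensor, save it, and load it back to confirm the
# round trip preserves both shape and values.
x = torch.randn(1024)
torch.save(x, 'tmp.pt')
y = torch.load('tmp.pt')

assert torch.equal(x, y)  # values survive the round trip
print(y.shape)            # torch.Size([1024])
```

The same pattern scales up; only the tensor size and the free disk space change.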

I used the same operations as you, but an error occurred when I loaded the saved tensor.

I’m not sure if this issue is Windows-specific, but could you update PyTorch to the latest nightly release and check whether this might be an already known and fixed issue, please?
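One way to check the installed version and switch to a nightly build is sketched below (the index URL is an example for a CUDA 11.8 wheel; pick the index matching your CUDA version, or the CPU index, from the official install selector):

```shell
# Print the currently installed version; nightly builds include
# "dev" in the version string.
python -c "import torch; print(torch.__version__)"

# Install the latest nightly build (example index URL for CUDA 11.8).
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
```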

I will try your suggestion. Thank you for your help!