What should I do to deal with this problem?
Thank you very much in advance!
I cannot reproduce the issue in a recent master build and am able to save a large tensor (e.g. 16GB) using:
>>> import torch
>>> x = torch.randn(int(4*1024**3), device='cuda')
>>> print(torch.cuda.memory_allocated()/1024**3)
16.0
>>> torch.save(x, 'tmp.pt')
>>> y = torch.load('tmp.pt')
>>> print(y.shape)
torch.Size([4294967296])
I’m not sure if this issue is Windows-specific. Could you update PyTorch to the latest nightly release and check whether this is a known issue that has already been fixed, please?
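For reference, here is a minimal CPU-only sketch of the same save/load round-trip (a small tensor is used so it runs without a GPU; the workflow is the same for CUDA tensors). The `map_location='cpu'` argument is optional here and is shown only because it is useful when loading a GPU-saved checkpoint on a machine without CUDA:

```python
import torch

# CPU-only sketch of the save/load round-trip; behavior should
# match the CUDA case from the snippet above.
x = torch.randn(1024)
torch.save(x, 'tmp.pt')

# map_location='cpu' lets you load a checkpoint that was saved
# from a GPU tensor onto a machine without CUDA.
y = torch.load('tmp.pt', map_location='cpu')
print(y.shape)
```

If the round-trip fails only on your machine, that would point to an environment-specific problem (e.g. disk space, filesystem limits, or an outdated PyTorch build) rather than a general bug.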
I will try according to your suggestion. Thank you for your help!