Torch Tensor with same shape but different storage size

I’m working on a GAN model. The generator produces an image tensor of size (3, 128, 128), which I save with this (pseudo-)code:

import torch
image = Generator(noise).clone()
tensor = image[0].detach().cpu()
torch.save(tensor, save_path)

The problem is that this tensor takes up more storage than a freshly created tensor, even though they have the same shape and dtype:

>>> import sys
>>> import torch
>>> tensor = torch.load(save_path)
>>> rand_t = torch.randn(tensor.shape)
>>> print(tensor.shape, rand_t.shape)
torch.Size([3, 128, 128]) torch.Size([3, 128, 128])
>>> print(tensor.dtype, rand_t.dtype)
torch.float32 torch.float32
>>> print(sys.getsizeof(tensor.storage()))
9830472
>>> print(sys.getsizeof(rand_t.storage()))
196680

When I dumped both tensors, the generator output took 9.2 MB while the random tensor took only 197.4 kB. I read through PyTorch’s documentation but found nothing. Can someone help me figure out the difference between them?

Extracting a sub-tensor by indexing returns a view that shares the original tensor’s storage, so saving it serializes the whole underlying storage, not just the slice:

>>> import sys
>>> import torch
>>> tensor = torch.randn(10,3,128,128)
>>> sys.getsizeof(tensor.storage())
1966144
>>> sub1 = tensor[0]
>>> sub1.shape
torch.Size([3, 128, 128])
>>> sys.getsizeof(sub1.storage())
1966144
>>> sub2 = tensor[0].clone()
>>> sub2.shape
torch.Size([3, 128, 128])
>>> sys.getsizeof(sub2.storage())
196672

Therefore, in my case, I need to clone the image slice taken from the generator output before saving it:

import torch
image = Generator(noise)
tensor = image[0].detach().clone().cpu() # clone() detaches the slice from the batch storage
torch.save(tensor, save_path)
  • There are some mistakes in my question that might confuse the reader; I apologize for that.