How can we find the total memory allocated for a tensor on the GPU? All of the statements below return 72, so it looks like I am missing something.

```
import sys
import torch

print(sys.getsizeof(torch.FloatTensor([0.5]).cuda()))       # 72
print(sys.getsizeof(torch.FloatTensor([0.5])))               # 72
print(sys.getsizeof(torch.FloatTensor([0.5, 0.7])))          # 72
print(sys.getsizeof(torch.FloatTensor([0.5, 0.7]).cuda()))   # 72
```

Or is it safe to assume that if a float tensor is on the GPU, the total memory consumed by the tensor is 4 bytes * length_of_tensor?
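For example, would computing it from the element size and element count give the same number? (This is just my own guess at how to do it, using `element_size()` and `nelement()`.)

```python
import torch

t = torch.FloatTensor([0.5, 0.7])  # float32 => 4 bytes per element

# My guess: total storage = bytes per element * number of elements
storage_bytes = t.element_size() * t.nelement()
print(storage_bytes)  # 8 (2 elements * 4 bytes)
```

Is this the right number, or does the GPU copy carry extra per-tensor overhead on top of this?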

It would also be useful to know how to calculate the memory consumed by an arbitrary object on the GPU, e.g. a torchtext.data.dataset.Dataset.
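One idea I had is to diff `torch.cuda.memory_allocated()` before and after creating the object. Would something like this be reliable, or does the caching allocator (block granularity, reuse of freed blocks) make the number misleading? A rough sketch of what I mean:

```python
import torch

def cuda_bytes_used(make_obj):
    """Rough guess: measure allocated bytes before and after creating an object.

    Returns None when no GPU is available.
    """
    if not torch.cuda.is_available():
        return None
    torch.cuda.synchronize()
    before = torch.cuda.memory_allocated()
    obj = make_obj()  # keep a reference so it isn't freed before measuring
    torch.cuda.synchronize()
    return torch.cuda.memory_allocated() - before

# e.g. measure a small tensor moved to the GPU
print(cuda_bytes_used(lambda: torch.FloatTensor([0.5, 0.7]).cuda()))
```

I suspect the result would be rounded up to the allocator's block size rather than the exact byte count, but I'm not sure.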