How is tensor storage size determined

In what cases is the underlying storage size not equal to t.numel() * t.element_size()?

A known case is a tensor with all strides equal to 0.

>>> t = torch.empty_strided(size=(3, 3), stride=(0, 0), dtype=torch.float32)
>>> t.storage().nbytes()
4

It looks like the storage size (in elements) of a tensor created via empty_strided is "1 plus the largest offset reachable through the strides", i.e. 1 + sum((size[i] - 1) * stride[i]); the byte count is that times element_size(). With all strides 0 the largest reachable offset is 0, so a single float32 element (4 bytes) suffices.
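A small sketch of that formula, checked against what empty_strided actually allocates (min_storage_bytes is a hypothetical helper name, not a PyTorch API; untyped_storage() is used to avoid the TypedStorage deprecation warning):

```python
import torch

def min_storage_bytes(size, stride, dtype):
    # Smallest storage (in bytes) that can back a tensor with the given
    # size and stride: 1 + the largest element offset reachable via the
    # strides, times the element size. A tensor with a zero-sized dim
    # needs no storage at all.
    if any(s == 0 for s in size):
        return 0
    numel_needed = 1 + sum((s - 1) * st for s, st in zip(size, stride))
    return numel_needed * torch.empty((), dtype=dtype).element_size()

# All-zero strides: only one element is ever addressed -> 4 bytes.
t = torch.empty_strided(size=(3, 3), stride=(0, 0), dtype=torch.float32)
assert t.untyped_storage().nbytes() == min_storage_bytes((3, 3), (0, 0), torch.float32)  # 4

# Ordinary contiguous strides: 1 + 2*3 + 2*1 = 9 elements -> 36 bytes.
u = torch.empty_strided(size=(3, 3), stride=(3, 1), dtype=torch.float32)
assert u.untyped_storage().nbytes() == min_storage_bytes((3, 3), (3, 1), torch.float32)  # 36
```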

And the storage size of a contiguous tensor is the product of its dimensions times element_size() (i.e. numel() * element_size()). Views of such a tensor share the base tensor's storage, so they report the base's storage size, which can be larger than the view's own numel() * element_size().
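A quick check of the contiguous/view case, which is another answer to the original question, since a strided view's storage size exceeds its own numel() * element_size() (untyped_storage() is used instead of the deprecated storage() accessor):

```python
import torch

# Contiguous tensor: storage size == numel * element_size.
base = torch.zeros(4, 4)  # 16 float32 elements -> 64 bytes
assert base.untyped_storage().nbytes() == base.numel() * base.element_size()  # 64

# A strided view shares the base's storage, so its storage size reflects
# the base's allocation, not the view's own element count.
view = base[::2, ::2]  # 4 elements (16 bytes), but the same 64-byte storage
assert view.untyped_storage().nbytes() == 64
assert view.numel() * view.element_size() == 16
```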