For reference, calling .numpy() on a CPU tensor returns a NumPy array that shares the underlying buffer with the tensor. No data is copied, and the dtype is preserved, so mutating one is visible through the other.
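A minimal sketch of the shared-buffer behavior (assuming torch is installed):

```python
import torch

t = torch.zeros(3)
a = t.numpy()          # no copy: the array aliases the tensor's storage
a[0] = 42.0            # writing through the array is visible in the tensor
print(t[0].item())     # -> 42.0
print(a.dtype, t.dtype)  # float32 on both sides
```

This only works for CPU tensors; a CUDA tensor must be moved to the CPU first, which does copy.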
sys.getsizeof internally calls the object's __sizeof__ method, adds the garbage collector overhead, and returns the result. Apparently __sizeof__ on PyTorch tensors doesn't take the size of the underlying storage buffer into account, so it only reports the small, roughly constant size of the tensor wrapper object:
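A quick illustration (assuming torch is installed; the reported wrapper size varies by version, but it stays far below the actual payload):

```python
import sys
import torch

t = torch.zeros(1000)                    # 1000 float32s -> 4000 bytes of storage
print(sys.getsizeof(t))                  # small: ignores the storage buffer
print(t.element_size() * t.nelement())   # 4000, the real payload size
```

To measure the memory a tensor actually holds, use element_size() * nelement() (or the storage's nbytes) instead of sys.getsizeof.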