torch.empty() does not allocate memory?

I found out that calling torch.empty() does not seem to allocate memory for future data. Is this intended behavior? I believed that torch.empty() works like the other construction methods, just without filling in any values.

The docs say:

Returns a tensor filled with uninitialized data.

That wording is ambiguous to me.

torch.empty does allocate memory, but it does not initialize this memory space.
I.e. the values in your tensor will just be whatever bits happen to be in the allocated memory.
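To illustrate the point (a minimal sketch, not from the thread): the storage behind torch.empty() exists immediately, so you can write into it right away; torch.zeros() allocates the same storage and additionally writes zeros into it.

```python
import torch

# torch.empty() reserves storage but leaves its contents arbitrary.
t = torch.empty(3, 4, dtype=torch.float32)
print(t.shape)  # torch.Size([3, 4]) -- the storage already exists

# Writing works immediately, because the memory is already allocated.
t.fill_(1.0)
print(t.sum().item())  # 12.0

# torch.zeros() allocates the same storage and additionally zeroes it.
z = torch.zeros(3, 4, dtype=torch.float32)
print(z.sum().item())  # 0.0
```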

Are you sure?

Here’s the output from the memory profiler:

Line #    Mem usage    Increment   Line Contents
     5     53.1 MiB     53.1 MiB   @profile
     6                             def main():
     7     53.2 MiB      0.1 MiB       t = torch.empty((1000000, 1000), dtype=torch.float, device='cpu')
     8     53.2 MiB      0.0 MiB       del t
     9   3868.3 MiB   3815.1 MiB       t = torch.zeros((1000000, 1000), dtype=torch.float, device='cpu')
    10     53.6 MiB      0.0 MiB       del t

Yes, otherwise you couldn’t index the tensor or work with it in any other way.
I’m not sure how the memory profiler works, but using the process.memory_info().data value you’ll get the expected result:

import os

import psutil
import torch

process = psutil.Process(os.getpid())
mem_t0 = process.memory_info().data

x = torch.empty(1000, 1000, dtype=torch.float32)

mem_t1 = process.memory_info().data
mem_expected = x.numel() * 4 / 1024  # float32 = 4 bytes per element, in kBytes
print('Expected {} kBytes'.format(mem_expected))
print('Actual {} kBytes'.format((mem_t1 - mem_t0) / 1024))
> Expected 3906.25 kBytes
> Actual 3908.0 kBytes
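My guess as to why the two tools disagree (an assumption, not confirmed in the thread): memory_profiler reports resident set size (RSS), and on a Linux-like OS the kernel commits pages lazily, so the virtual memory behind torch.empty() only counts toward RSS once it is actually written to. A sketch of that, assuming demand paging:

```python
import os

import psutil
import torch

process = psutil.Process(os.getpid())

rss_before = process.memory_info().rss
t = torch.empty(10_000_000, dtype=torch.float32)  # ~40 MB of virtual address space
rss_after_empty = process.memory_info().rss

t.fill_(0.0)  # touching every page forces the kernel to commit it to physical memory
rss_after_write = process.memory_info().rss

print('RSS delta after empty: {:.1f} MB'.format((rss_after_empty - rss_before) / 2**20))
print('RSS delta after write: {:.1f} MB'.format((rss_after_write - rss_before) / 2**20))
```

On such a system the first delta should stay small while the second jumps by roughly the tensor's size, matching the memory_profiler output above.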