From_numpy vs as_tensor

As I understand it, when a is an ndarray, torch.from_numpy(a) is the same as torch.as_tensor(a): neither copies the data, they just share the same memory, so I assumed they would be equally fast. But in my tests from_numpy is faster than as_tensor. Could someone explain this to me? Thanks a lot in advance.

Yes, both approaches share the underlying memory.
torch.as_tensor accepts a bit more than torch.from_numpy, such as Python lists, and might thus have a slightly higher overhead due to these extra input checks.
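
A minimal sketch of both points, using illustrative names (a, t1, t2) and a rough timeit loop; the exact numbers will vary by machine, but from_numpy tends to come out slightly ahead because as_tensor first has to inspect the input type:

import timeit
import numpy as np
import torch

a = np.zeros(3)

# Both tensors share memory with the NumPy array, so an in-place change shows up in both.
t1 = torch.from_numpy(a)
t2 = torch.as_tensor(a)
a[0] = 1.0
print(t1)  # tensor([1., 0., 0.], dtype=torch.float64)
print(t2)  # tensor([1., 0., 0.], dtype=torch.float64)

# Rough per-call overhead comparison; from_numpy skips the extra type dispatch.
print(timeit.timeit(lambda: torch.from_numpy(a), number=100_000))
print(timeit.timeit(lambda: torch.as_tensor(a), number=100_000))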

Thank you for your reply!

as_tensor() returns a tensor on the default device, whereas from_numpy() always returns a tensor on the CPU:

>>> import torch
>>> import numpy as np
>>> np_array = np.ones(3)
>>> torch.set_default_tensor_type(torch.cuda.FloatTensor)
>>> torch.as_tensor(np_array).device
device(type='cuda', index=0)
>>> torch.from_numpy(np_array).device
device(type='cpu')
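
A small follow-up sketch (assuming a CUDA device is available): once the default tensor type is a CUDA type, as_tensor has to copy the data onto the GPU, so the memory-sharing behaviour only holds for the CPU tensor that from_numpy returns.

import numpy as np
import torch

torch.set_default_tensor_type(torch.cuda.FloatTensor)

np_array = np.zeros(3, dtype=np.float32)
cuda_t = torch.as_tensor(np_array)   # copied onto the GPU
cpu_t = torch.from_numpy(np_array)   # still shares memory with np_array on the CPU

np_array[0] = 1.0
print(cuda_t)  # tensor([0., 0., 0.], device='cuda:0') - does not see the change
print(cpu_t)   # tensor([1., 0., 0.])                  - sees the change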