I get the following error when I use capitalized Tensor

>>> torch.Tensor(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/site-packages/torch/tensor.py", line 57, in __repr__
    return torch._tensor_str._str(self)
  File "/usr/lib/python3.6/site-packages/torch/_tensor_str.py", line 218, in _str
    fmt, scale, sz = _number_format(self)
  File "/usr/lib/python3.6/site-packages/torch/_tensor_str.py", line 96, in _number_format
    if value != math.ceil(value.item()):
RuntimeError: Overflow when unpacking long

but no problem if I use torch.tensor(1).

However, one good thing about the capitalized Tensor is that I can specify a shape, like torch.Tensor(2, 3), which gives me an uninitialized 2×3 tensor.
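To illustrate the shape-vs-data distinction (a minimal sketch, assuming a recent PyTorch):

```python
import torch

# torch.Tensor(2, 3) treats its arguments as a *shape*:
# it returns an uninitialized 2x3 tensor of the default dtype.
a = torch.Tensor(2, 3)
print(a.shape)   # torch.Size([2, 3])
print(a.dtype)   # torch.float32 (the global default dtype)

# torch.tensor(...) treats its argument as *data*, so you pass
# the values (e.g. a nested list) rather than a shape.
b = torch.tensor([[0, 0, 0], [0, 0, 0]])
print(b.shape)   # torch.Size([2, 3])
```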

@richard So is it faster and more memory efficient to use torch.empty() instead of torch.Tensor(), when defining empty tensors in a class initialization function?

Our torch.Tensor constructor is overloaded to do the same thing as both torch.tensor and torch.empty. We thought this overloading would make code confusing, so we split torch.Tensor into torch.tensor and torch.empty.

So @yxchng yes, to some extent, torch.tensor works similarly to torch.Tensor (when you pass in data). @ProGamerGov no, neither should be more efficient than the other; it’s just that torch.empty and torch.tensor have a nicer API than our legacy torch.Tensor constructor.
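A quick sketch of the equivalence described above (assuming a recent PyTorch):

```python
import torch

# Shape-style call: both produce an uninitialized 2x3 tensor
# of the default dtype, so torch.empty is a drop-in replacement.
legacy = torch.Tensor(2, 3)
modern = torch.empty(2, 3)
assert legacy.shape == modern.shape == torch.Size([2, 3])

# Data-style call: torch.Tensor copies the data but keeps the
# default dtype, while torch.tensor infers the dtype from the data.
legacy_data = torch.Tensor([1, 2, 3])   # float32 (default dtype)
modern_data = torch.tensor([1, 2, 3])   # int64 (inferred from ints)
print(legacy_data.dtype, modern_data.dtype)
```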

torch.Tensor is a kind of mixture of torch.empty and torch.tensor, but when you pass in data, torch.Tensor uses the global default dtype, while torch.tensor infers the dtype from the data.
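For example, the dtype-inference difference looks like this (a sketch assuming a recent PyTorch; the default dtype is float32 unless you change it):

```python
import torch

# torch.Tensor always uses the global default dtype:
print(torch.Tensor([1, 2]).dtype)        # torch.float32

# torch.tensor infers the dtype from the data:
print(torch.tensor([1, 2]).dtype)        # torch.int64
print(torch.tensor([1.0, 2.0]).dtype)    # torch.float32
print(torch.tensor([True, False]).dtype) # torch.bool

# Changing the global default affects torch.Tensor:
torch.set_default_dtype(torch.float64)
print(torch.Tensor([1, 2]).dtype)        # torch.float64
torch.set_default_dtype(torch.float32)   # restore the default
```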

One important thing is that torch.tensor(1) gives you a tensor holding the fixed value 1, while torch.Tensor(1) gives you a tensor of size 1 that is uninitialized — its contents are whatever happened to be in memory, which is also why printing it can trigger the overflow error above.
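Concretely (a sketch assuming a recent PyTorch; note we avoid printing the uninitialized values themselves):

```python
import torch

t = torch.tensor(1)   # a 0-dimensional tensor holding the value 1
print(t.item())       # 1
print(t.shape)        # torch.Size([])

u = torch.Tensor(1)   # here the 1 is a *size*: a 1-element tensor
print(u.shape)        # torch.Size([1]) -- contents are uninitialized
```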