What is the difference between Tensor and tensor? Is Tensor going to be deprecated in the future?

I am using PyTorch 0.4.

I get the following error when I use the capitalized Tensor:

>>> torch.Tensor(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/site-packages/torch/tensor.py", line 57, in __repr__
    return torch._tensor_str._str(self)
  File "/usr/lib/python3.6/site-packages/torch/_tensor_str.py", line 218, in _str
    fmt, scale, sz = _number_format(self)
  File "/usr/lib/python3.6/site-packages/torch/_tensor_str.py", line 96, in _number_format
    if value != math.ceil(value.item()):
RuntimeError: Overflow when unpacking long

but there is no problem if I use torch.tensor(1).

However, one good thing about the capitalized Tensor is that I can specify the shape, as in torch.Tensor(2,3), which gives me

tensor([[-1.0365e+18,  3.0754e-41,  5.5266e+30],
        [ 3.0754e-41,  4.4842e-44,  0.0000e+00]])

I can’t do that with torch.tensor(2,3) which gives me

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: tensor() takes 1 positional argument but 2 were given

Are all these behaviors expected? When should I use which?

There’s a bug in tensor printing where printing a tensor containing a very large value raises a RuntimeError. torch.Tensor(1) creates an uninitialized tensor of size 1, and the garbage value it happens to contain triggers the overflow during printing.

You should use torch.empty in place of torch.Tensor.
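
For example, a minimal sketch of the replacement (the printed values are arbitrary, since the memory is uninitialized):

import torch

x = torch.empty(2, 3)  # uninitialized 2x3 tensor, same as the legacy torch.Tensor(2, 3)
print(x.shape)         # torch.Size([2, 3])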

torch.tensor is similar to numpy.array and can also be used to create scalars.

a = torch.tensor([1, 2])  # accepts a list of data
a = torch.tensor(1)       # 0-dim scalar tensor holding the value 1
a = torch.Tensor(2, 3)    # uninitialized 2x3 Tensor

Isn’t torch.tensor also a Tensor?

@richard So is it faster and more memory-efficient to use torch.empty() instead of torch.Tensor() when defining empty tensors in a class initialization function?

Our torch.Tensor constructor is overloaded to do the same thing as both torch.tensor and torch.empty. We thought this overload would make code confusing, so we split torch.Tensor into torch.tensor and torch.empty.

So @yxchng yes, to some extent, torch.tensor works similarly to torch.Tensor (when you pass in data). @ProGamerGov no, neither should be more efficient than the other. It’s just that torch.empty and torch.tensor have a nicer API than our legacy torch.Tensor constructor.
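
To illustrate the overload, a small sketch (the contents in the uninitialized cases are arbitrary):

import torch

torch.Tensor(2, 3)    # sizes -> behaves like torch.empty(2, 3), uninitialized
torch.Tensor([2, 3])  # data  -> behaves like torch.tensor([2.0, 3.0]), default dtype
torch.empty(2, 3)     # preferred way to ask for uninitialized storage
torch.tensor([2, 3])  # preferred way to build a tensor from data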

torch.Tensor is a kind of mixture of torch.empty and torch.tensor, but when you pass in data, torch.Tensor uses the global default dtype, while torch.tensor infers the data type from the data.
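
A short sketch of the dtype difference (assuming the global default dtype is the usual torch.float32):

import torch

print(torch.Tensor([1, 2]).dtype)      # torch.float32 -- global default dtype
print(torch.tensor([1, 2]).dtype)      # torch.int64   -- inferred from the ints
print(torch.tensor([1.0, 2.0]).dtype)  # torch.float32 -- inferred from the floats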

It looks like torch.tensor is better than torch.Tensor when both can achieve the same thing?

I found a case where you would use Tensor rather than tensor:

torch.Tensor with int / float

In[] : 1/torch.Tensor([1,2,3])
Out[]: tensor([ 1.0000,  0.5000,  0.3333])

torch.tensor with int

In[] : 1/torch.tensor([1,2,3])
------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-231-55a86778a1f4> in <module>()
----> 1 1/torch.tensor([1,2,3])
.../anaconda3/lib/python3.6/site-packages/torch/tensor.py in __rdiv__(self, other)
    318 
    319     def __rdiv__(self, other):
--> 320         return self.reciprocal() * other
    321     __rtruediv__ = __rdiv__
    322     __itruediv__ = _C._TensorBase.__idiv__

RuntimeError: reciprocal is not implemented for type torch.LongTensor

torch.tensor with float

In []: 1/torch.tensor([1.,2.,3.])
Out[]: tensor([ 1.0000,  0.5000,  0.3333])
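
A possible workaround for the integer case, not from the thread itself: make the tensor floating-point before dividing.

import torch

a = torch.tensor([1, 2, 3])            # dtype inferred as torch.int64
print(1 / a.float())                   # tensor([1.0000, 0.5000, 0.3333])
print(1 / torch.tensor([1., 2., 3.]))  # same result, float dtype from the start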

One important thing is that torch.tensor(1) gives you a tensor holding the fixed value 1, while torch.Tensor(1) gives you an uninitialized tensor of size 1 (it contains whatever garbage happens to be in memory, not a proper random initialization).
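
A quick sketch of that difference (the contents of b are garbage, and printing b on 0.4 can even hit the overflow bug mentioned at the top of this thread):

import torch

a = torch.tensor(1)   # 0-dim tensor holding the value 1
b = torch.Tensor(1)   # 1-element tensor with uninitialized contents
print(a)              # tensor(1)
print(a.shape)        # torch.Size([])
print(b.shape)        # torch.Size([1])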

So when do we use which?

Well, it depends on your use case, as my reply mentioned. If you want to explicitly specify what the tensor contains, then use lowercase torch.tensor.

Update: this division is now treated as integer division.

import torch

a = torch.tensor([1, 2, 3])

print(1/a)

Here the output is tensor([1, 0, 0]); effectively a floor division (//) is performed.
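
Note that this behavior depends on the PyTorch version; recent releases perform true division for /. A sketch for requesting floor division explicitly (assuming PyTorch >= 1.8, where the rounding_mode argument exists):

import torch

a = torch.tensor([1, 2, 3])
print(torch.div(torch.tensor(1), a, rounding_mode='floor'))  # tensor([1, 0, 0])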

Thanks

Answer to the title of the question: torch.Tensor is the legacy constructor that mixes the behavior of torch.tensor and torch.empty; prefer torch.tensor when building a tensor from data and torch.empty when you want uninitialized storage.