Why should the torch.tensor scalar tensor be used instead of Tensor(1)?

Hi, PyTorch 0.4 introduces a new kind of scalar: torch.tensor() can now create tensors with dim 0.
I'm confused, since everything a scalar tensor does could seemingly be done with a 1-dim tensor of size 1, i.e. Tensor(1).
Why add another type that makes the API more complex?

When you index into a vector, you get a scalar, so scalar support is natural. In fact, not having it makes the API more complex, because indexing then has to be special-cased to return something other than a tensor.
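
A minimal sketch of that point (assuming PyTorch >= 0.4; variable names are illustrative):

import torch

v = torch.tensor([10, 20, 30])  # a 1-dim tensor (a vector)
s = v[1]                        # indexing returns a 0-dim scalar tensor
print(s)                        # tensor(20)
print(s.dim())                  # 0
print(s.item())                 # 20, converted back to a Python int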

Is there a way to create a scalar other than indexing? Currently I am doing this:

scalar = torch.Tensor([0])[0]

A bit weird, I would say.

scalar = torch.tensor(0)
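
A quick comparison of the two approaches (a sketch, assuming PyTorch >= 0.4). Both yield 0-dim tensors, but note the dtypes differ, which comes up below:

import torch

a = torch.Tensor([0])[0]  # 0-dim, dtype torch.float32 (torch.Tensor defaults to float)
b = torch.tensor(0)       # 0-dim, dtype torch.int64 (inferred from the int literal)
print(a.dim(), b.dim())   # 0 0
print(a.dtype, b.dtype)   # torch.float32 torch.int64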

Nice. I found a problem though:

>>> x = torch.tensor(0)
>>> x
tensor(0)
>>> x += 3.2
>>> x
tensor(3)
>>> x = torch.tensor(0, dtype=torch.float32)
>>> x += 3.2
>>> x
tensor(3.2000)

Isn’t the default dtype supposed to be torch.float32?

The dtype is inferred from the data you pass, so an integer literal gives you an integer tensor.
Try x = torch.tensor(0.) instead.
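
A short illustration of the inference rule (a sketch; the float default assumes an unmodified torch.get_default_dtype() of float32):

import torch

print(torch.tensor(0).dtype)   # torch.int64, inferred from the Python int
print(torch.tensor(0.).dtype)  # torch.float32, inferred from the Python float
print(torch.tensor(0, dtype=torch.float32).dtype)  # torch.float32, explicit

x = torch.tensor(0.)  # float scalar, so float arithmetic is preserved
x += 3.2
print(x)              # tensor(3.2000)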
