# 0 dimension tensor

I am using torch v0.4

1. Why I got 0 dimension tensor in pytorch?
2. Why do we need it?

Below is what I did in a Jupyter notebook (I ran into this while running some code, though I cannot find a way to create a 0-dim tensor myself):

print(type(class_correct[0]))
print(class_correct[0].dim())
print(class_correct[0])

<class 'torch.Tensor'>
0
tensor(48, dtype=torch.uint8)

In v0.4 it is easy to create a 0-size tensor:

a = torch.Tensor()

However, calling `size()` gives `torch.Size([0])`, while calling `dim()` yields 1, so this is still a 1-dim tensor. How is `class_correct` defined?
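The distinction is easy to check directly; a minimal sketch:

```python
import torch

# torch.Tensor() with no arguments creates an empty 1-dim tensor,
# not a 0-dim (scalar) tensor.
empty = torch.Tensor()
print(empty.size())   # torch.Size([0]) -- zero elements
print(empty.dim())    # 1 -- but still one dimension
```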

Code:

# 0-dim tensor

a = torch.Tensor(3)
print(a[0])
print(a[0].dim())

# 1-dim tensor

b = torch.Tensor(3,1)
print(b[0])
print(b[0].dim())

Output:
tensor(0.)
0
tensor([ 0.])
1

I found this when I was running one of the older PyTorch tutorials with a newer version of PyTorch. It seems that part of the tutorial is fixed now, but I am still wondering why it changed like this.

For 0.4 the above doesn't work. To create a 0-dim tensor (i.e. a scalar tensor, as opposed to a 1-dimensional vector), do this:

`a = torch.tensor(3)`

Yes, the capital T makes all the difference :).

`torch.Tensor` is an alias for the default tensor type ( `torch.FloatTensor` ).

A tensor can be constructed from a Python `list` or sequence using the `torch.tensor()` constructor.
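To summarize the difference, a short sketch contrasting the two constructors:

```python
import torch

# torch.tensor (lowercase) interprets its argument as data:
a = torch.tensor(3)        # 0-dim scalar tensor holding the value 3
print(a.dim())             # 0
print(a.item())            # 3

# torch.Tensor (capital) interprets a bare int as a size:
b = torch.Tensor(3)        # 1-dim float tensor with 3 uninitialized values
print(b.dim())             # 1
print(b.size())            # torch.Size([3])

# torch.tensor also accepts a Python list or sequence:
c = torch.tensor([1, 2, 3])
print(c.dim())             # 1
```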


Do you know when we need this 0-dim tensor by any chance?


Many API interfaces expect a `Tensor` type, so if you want to return a scalar from such an API you need to wrap it in a tensor when returning.

Assume there is an operation that concatenates some tensor B = [b1, b2, …, bn] (where each bi is a vector) to another tensor A. When you don't want B, you can simply set n = 0 without special-casing the shape of B, as long as PyTorch handles zero-sized tensors correctly.
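The n = 0 case amounts to concatenating a zero-sized tensor, which works in recent PyTorch versions without any branching; a sketch (tensor names here are illustrative):

```python
import torch

A = torch.ones(2, 4)            # existing rows
B = torch.empty(0, 4)           # n = 0: no rows to append
out = torch.cat([A, B], dim=0)  # no special-casing of the empty B needed
print(out.shape)                # torch.Size([2, 4])
```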