torch.FloatTensor and torch.tensor return tensors of different shapes


Recently, I’ve been facing the same issue as in How to fix Mismatch in shape when using .backward() function

Originally, I used torch(.cuda).FloatTensor to create the tensor. Comparing it to the solution in that post, this is what I got:

# using FloatTensor: the argument is interpreted as a shape,
# so this creates an uninitialized 1-element tensor
a_one = torch.FloatTensor(1)
>>> tensor([0.])
>>> torch.float32

# using torch.tensor: the argument is interpreted as the value,
# so this creates a 0-dimensional (scalar) tensor
b_one = torch.tensor(1.)
>>> tensor(1.)
>>> torch.float32

Can someone explain why these two tensor initializations return tensors of different shapes?

In torch.FloatTensor the argument you pass is the shape, not the values in the tensor, while in torch.tensor you pass in the values of the tensor, not the shape.
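A quick sketch of the difference (the actual contents of the FloatTensor are uninitialized and will vary from run to run, so only the shapes are checked here):

```python
import torch

# torch.FloatTensor(2, 3) treats the arguments as a shape:
# a 2x3 tensor whose values are uninitialized (arbitrary)
a = torch.FloatTensor(2, 3)
print(a.shape)  # torch.Size([2, 3])

# torch.tensor(...) treats the argument as data:
# a 0-dimensional (scalar) tensor here, holding exactly the value given
b = torch.tensor(1.)
print(b.shape)  # torch.Size([])
print(b.item())  # 1.0
```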


But if you put this in, it returns the same dimension as torch.FloatTensor(1), except now it has a value?

a = torch.FloatTensor([1])
>>> tensor([1.])

If you pass it as a list, then FloatTensor uses it as input data instead of a shape.
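To illustrate: with the legacy constructor, a list argument is taken as data, while bare integers are taken as a shape (the values in the shape form are uninitialized, so only its shape is checked):

```python
import torch

# list argument -> interpreted as data
data = torch.FloatTensor([2, 3])
print(data)        # tensor([2., 3.])
print(data.shape)  # torch.Size([2])

# bare integer arguments -> interpreted as a shape, values uninitialized
shaped = torch.FloatTensor(2, 3)
print(shaped.shape)  # torch.Size([2, 3])
```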

The short answer is: don’t use torch.FloatTensor to create tensors, but stick to the factory methods, such as torch.randn, torch.ones, torch.empty, torch.tensor, etc.
The former approach might have unexpected behavior (as seen in this topic), could yield an uninitialized tensor (as explained by @Dwight_Foster), etc.
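With the factory methods, the shape-vs-data distinction is explicit in the function name, and dtype/device can be passed directly. A small sketch:

```python
import torch

# shape-taking factories: the arguments are dimensions
x = torch.empty(1)        # shape (1,), uninitialized values
y = torch.zeros(1)        # shape (1,), filled with 0.
r = torch.randn(2, 3)     # shape (2, 3), standard-normal values

# data-taking factory: the argument is the content
z = torch.tensor([1.], dtype=torch.float32)  # shape (1,), value [1.]

print(y)        # tensor([0.])
print(z.shape)  # torch.Size([1])
print(r.shape)  # torch.Size([2, 3])
```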