Difference between Tensor and torch.FloatTensor?

When I run the following code,

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms import transforms
tmp = np.array(Image.open(source_image_path).convert('RGB'))
a = torch.FloatTensor(tmp)
print(a.dtype)
input_data = a.reshape([1, 3, 256, 256])
b = torch.tensor(pixel_flow, requires_grad=True)
grid = b.reshape([1, 256, 256, 2])
c = F.grid_sample(input_data, grid)

I get this error:
TypeError: FloatSpatialGridSamplerBilinear_updateOutput received an invalid combination of arguments - got (int, Tensor, Tensor, Tensor, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor grid, torch.FloatTensor output, int padding_mode)

So what is the difference between Tensor and FloatTensor?

I found what was wrong. When I modified a and b like this:

a = torch.tensor(tmp, requires_grad=False, dtype=torch.float64)
b = torch.tensor(pixel_flow, requires_grad=False, dtype=torch.float64)

it works!
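For reference, here is a minimal self-contained sketch of the same pipeline in float32 instead (random arrays stand in for the real image and pixel_flow, which are not shown above, and permute is used so the channel dimension ends up where grid_sample expects it). As far as I can tell, the key point is that input and grid need the same floating-point dtype:

import numpy as np
import torch
import torch.nn.functional as F

# Stand-ins for the real data: an H x W x C image and an H x W x 2 sampling grid
tmp = np.random.rand(256, 256, 3).astype(np.float32)
pixel_flow = np.random.rand(256, 256, 2).astype(np.float32)

a = torch.tensor(tmp, dtype=torch.float32)
b = torch.tensor(pixel_flow, dtype=torch.float32, requires_grad=True)

input_data = a.permute(2, 0, 1).unsqueeze(0)   # N x C x H x W
grid = b.reshape(1, 256, 256, 2)               # N x H x W x 2, values in [-1, 1]
c = F.grid_sample(input_data, grid, align_corners=True)
print(c.shape)  # torch.Size([1, 3, 256, 256])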

As a small side note, even though it’s working now:
If you are dealing with numpy arrays, I would recommend using torch.from_numpy().
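For example, something like this (with a random array standing in for the loaded image, just to show the dtypes):

import numpy as np
import torch

# Stand-in for the image array: H x W x C, uint8
tmp = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)

# from_numpy shares memory with the numpy array and keeps its dtype
a = torch.from_numpy(tmp)
print(a.dtype)  # torch.uint8

# cast to float32 before passing it to ops like grid_sample
a = a.float()
print(a.dtype)  # torch.float32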


Thank you very much! I also found that when I use torch.from_numpy(), I can still enable gradients on the tensor by calling tensor.requires_grad_().
It is really amazing, thanks again!
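The pattern is roughly this (a zero array standing in for the real pixel_flow):

import numpy as np
import torch

# Made-up stand-in for pixel_flow: an H x W x 2 float32 flow field
pixel_flow = np.zeros((256, 256, 2), dtype=np.float32)

# from_numpy keeps float32, and requires_grad_() turns on gradients in place
b = torch.from_numpy(pixel_flow).requires_grad_()
print(b.dtype, b.requires_grad)  # torch.float32 True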