type() prints DoubleTensor but dtype shows float64

I converted the tensor to double() after the complaint (below) during the loss calculation, but when I print tensor.type() it shows torch.DoubleTensor, while the printed contents still end with dtype=torch.float64.

What is the difference between the two, and why are they shown differently? Thanks.
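Roughly, this is what I'm doing (a simplified sketch; the random stand-in data is just for illustration):

    import torch

    targets = torch.rand(64)    # stand-in for my actual targets batch
    targets = targets.double()  # the conversion I added for the loss calc
    print('targets: ', type(targets), len(targets), targets.type())
    # -> targets:  <class 'torch.Tensor'> 64 torch.DoubleTensor
    print(targets)              # the printout ends with dtype=torch.float64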

targets:  <class 'torch.Tensor'> 64 torch.DoubleTensor
 tensor([1.6250, 3.2660, 1.2980, 2.1920, 0.9170, 1.0800, 1.2320, 2.3020, 1.9490,
        1.7990, 3.1560, 1.6250, 1.0340, 2.4040, 1.4870, 0.9850, 1.6060, 3.2130,
        1.4580, 3.2660, 0.9850, 3.6350, 2.2090, 2.7360, 2.6490, 1.6340, 0.9060,
        0.9910, 1.1500, 2.8080, 2.2500, 0.6720, 5.0000, 2.6900, 1.4320, 0.7190,
        2.7820, 0.9790, 1.1320, 1.6310, 3.4060, 1.7070, 1.4060, 3.3280, 2.3270,
        0.9890, 2.6280, 2.3740, 2.6700, 1.1440, 1.1790, 1.4100, 2.4890, 2.3730,
        1.1880, 3.8380, 1.1360, 2.5190, 1.4910, 2.2220, 3.0420, 0.7370, 2.2430,
        1.3810], dtype=torch.float64)

inputs:  <class 'torch.Tensor'> torch.Size([64, 8])
targets:  <class 'torch.Tensor'> torch.Size([64])
Traceback (most recent call last):
  File "p308.py", line 224, in <module>
    train_model(train_dl, model)
  File "p308.py", line 164, in train_model
    yhat = model(inputs[1])
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "p308.py", line 104, in forward
    X = self.hidden1(X.double()).double()
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 96, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1847, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: expected scalar type Double but found Float

Could you check the dtype of the layer’s parameters as well, please?
I would have expected to see the opposite type mismatch if the layer were using float32 parameters while the input is float64, but I also don't know if the error message was changed recently.

How should I get the dtype of the layer's params? Sorry, I am still not familiar with the commands and the internals of the classes.

I did the following manipulation and now it works (convert the input to float, then the output back to double after the forward pass). It is probably not the cleanest way to do it, so I will look it up later and see what I can improve. For now, the loss and training are working OK.

    # forward propagate input
    def forward(self, X):
        X = self.hidden1(X.float())  # cast input to float32 to match the layer parameters
        X = self.act1(X)
        X = self.hidden2(X)
        X = self.act2(X)
        return X.double()            # cast the output back to float64 for the loss

If this code works, it would mean that the model indeed uses float32 for its parameters, which is also the default. You could check it with print(model.hidden1.weight.dtype). However, in my setup at least, the expected and found types appear in the opposite order:

import torch
import torch.nn as nn

lin = nn.Linear(10, 10)  # lin uses float32 parameters by default
x = torch.randn(1, 10).double()
lin(x)
> RuntimeError: expected scalar type Float but found Double

In any case, make sure the model and input use the same dtype.
If you want to use DoubleTensors, call .double() on the tensors and model.
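For example (a rough sketch; model and inputs stand for your module and input batch):

    # check which dtype the parameters currently use
    for name, param in model.named_parameters():
        print(name, param.dtype)  # e.g. hidden1.weight torch.float32

    # option 1: move the model to float64 and feed it float64 inputs
    model.double()
    yhat = model(inputs.double())

    # option 2 (the default): keep everything in float32
    model.float()
    yhat = model(inputs.float())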

Thanks, it worked. Now my losses are not converging; they are all over the place!
I tried setting lr and momentum as low as 0.000001 and 0.00009, down from the defaults of 0.01 and 0.9, to no avail.
Because of this, the predicted values are all 0.
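Roughly, the optimizer line I'm tweaking looks like this (a sketch; SGD is assumed here from the momentum setting):

    from torch.optim import SGD

    # defaults I started from
    optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9)
    # smallest values I tried, still no convergence
    optimizer = SGD(model.parameters(), lr=0.000001, momentum=0.00009)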

I am using housing data with two layers (a sketch of the full model follows below):
Linear(8, 30)
Linear(30, 1)

with MSELoss.
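A sketch of the model as described (the activation choices are assumptions, since only the Linear layers and the loss are named above; the float/double casts follow my forward from earlier):

    import torch.nn as nn

    class HousingMLP(nn.Module):
        def __init__(self):
            super().__init__()
            self.hidden1 = nn.Linear(8, 30)
            self.act1 = nn.ReLU()        # assumed hidden activation
            self.hidden2 = nn.Linear(30, 1)
            self.act2 = nn.Identity()    # assumed output activation

        def forward(self, X):
            X = self.hidden1(X.float())  # cast input to float32, as above
            X = self.act1(X)
            X = self.hidden2(X)
            X = self.act2(X)
            return X.double()            # cast output back to float64

    model = HousingMLP()
    criterion = nn.MSELoss()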