PyTorch tensor from loss function won't change data type after casting, which causes a bug

I am using the MSE loss function from PyTorch's nn module to calculate a loss during training, like so:

        loss = criterion(prediction, truth)
        loss = loss.to(torch.double)

        optimizer.zero_grad()               
        loss.backward()                      
        optimizer.step()  

but for some reason I keep getting the error:

    ---> 54             loss.backward()
         55             optimizer.step()
         56 
        RuntimeError: expected dtype Double but got dtype Long (validate_dtype at /Users/distiller/project/conda/conda-bld/pytorch_1587428061935/work/aten/src/ATen/native/TensorIterator.cpp:143)

Even though I cast my loss to a double, it still won’t work.

To investigate further, I ran it in a notebook and observed the following, which may be related to the problem:

    print(loss)
    --> tensor(841645.2747, dtype=torch.float64, grad_fn=<MseLossBackward>)

    l1 = loss.to(torch.double)
    print(l1)
    --> tensor(841645.2747, dtype=torch.float64, grad_fn=<MseLossBackward>)

    l2 = loss.double()
    print(l2)
    --> tensor(841645.2747, dtype=torch.float64, grad_fn=<MseLossBackward>)

Any ideas?

Are the prediction and truth variables holding double tensors?

They weren't. Thank you so much!
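
For anyone hitting the same error: what matters is the dtype of the tensors passed into the criterion, not the dtype you cast the resulting loss to afterwards, because backward() works with the tensors that were saved when the loss was computed. Below is a minimal sketch of the fix, assuming truth was an integer (Long) tensor as it was here; the shapes and values are only placeholders.

    import torch
    import torch.nn as nn

    criterion = nn.MSELoss()

    # placeholder tensors standing in for the real model output and labels
    prediction = torch.randn(8, 1, dtype=torch.double, requires_grad=True)
    truth = torch.randint(0, 5, (8, 1))      # integer labels -> torch.long

    # cast the inputs to a matching floating dtype before computing the loss;
    # casting the loss tensor afterwards does not change what backward() sees
    loss = criterion(prediction, truth.to(prediction.dtype))

    loss.backward()                          # succeeds: both operands are double

The same idea applies in the original training loop: convert truth (and, if needed, prediction) to a floating-point dtype before calling criterion(prediction, truth); the loss = loss.to(torch.double) line can then be dropped.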