loss.float().backward() raises "Found dtype Double but expected Float"

I’m explicitly converting my loss (the output of nn.MSELoss) to float in this call:

loss.float().backward()

But I’m still getting this error:

Found dtype Double but expected Float

For completeness, here is the traceback:

loss.float().backward()
  File "/home/ian/anaconda3/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/ian/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
RuntimeError: Found dtype Double but expected Float

What am I doing wrong?

It turned out the loss wasn’t the problem. Calling loss.float() only casts the final scalar; the tensors already recorded in the autograd graph keep their original dtypes, so backward() still propagates Double gradients into Float parameters. My input data was Double (float64, NumPy’s default), while nn.Linear weights are Float (float32) by default. The fix was to cast the Linear layers to double() when declaring them, so parameters and data share the same dtype.
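
A minimal sketch of the fix (the shapes and variable names here are illustrative, not from my actual model):

import torch
import torch.nn as nn

# Data that is float64, e.g. tensors created from NumPy arrays
x = torch.randn(8, 4, dtype=torch.float64)
y = torch.randn(8, 1, dtype=torch.float64)

# Cast the layer so its weights are float64, matching the data
model = nn.Linear(4, 1).double()

loss = nn.MSELoss()(model(x), y)
loss.backward()  # no dtype mismatch now

The alternative is to go the other way and cast the data with x.float() / y.float(), which is usually preferable since float32 is PyTorch’s default and is faster.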