RuntimeError: Found dtype Double but expected Float

I'm getting this error when trying to train my model. I have tried a bunch of fixes I found on forums, but nothing is working.

Notebook is public here:
https://drive.google.com/file/d/1akU15oJrzFi9KsY3oyrstmvf9tW_8mpS/view?usp=sharing

ERROR:

Epoch:   0%|          | 0/4 [00:00<?, ?it/s]

RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
     30     # Backward pass
     31     torch.set_default_dtype(torch.float64)
---> 32     loss.backward()
     33     # Update parameters and take a step using the computed gradient
     34     optimizer.step()

1 frames
/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
    130     Variable._execution_engine.run_backward(
    131         tensors, grad_tensors_, retain_graph, create_graph,
--> 132         allow_unreachable=True)  # allow_unreachable flag
    133 
    134 

RuntimeError: Found dtype Double but expected Float

I guess the error is raised by changing the default dtype before calling the backward method in:

     31     torch.set_default_dtype(torch.float64)
---> 32     loss.backward()

I assume this could create type mismatches, since tensors created in the forward pass would no longer match the dtype expected in the backward pass.
Could you explain why you are changing the default dtype globally during training?
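
For illustration, here is a minimal sketch of the kind of mismatch I mean (nn.Linear and mse_loss are just stand-ins, since I cannot see your model): everything created before the switch is float32, everything created afterwards is float64, and backward() fails once a Double gradient meets the Float parameters.

    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(10, 1)      # parameters are float32 (default dtype)
    x = torch.randn(8, 10)              # float32 input
    output = model(x)                   # float32 output

    torch.set_default_dtype(torch.float64)
    target = torch.randn(8, 1)          # created after the switch -> float64

    loss = F.mse_loss(output, target)   # forward silently promotes the loss to Double
    loss.backward()                     # RuntimeError: Found dtype Double but expected Float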

Sorry, I copied that error from a test where I was just trying something out… My mistake. Here is the actual error:

Epoch:   0%|          | 0/4 [00:00<?, ?it/s]

RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
     29     train_loss_set.append(loss.item())
     30     # Backward pass
---> 31     loss.backward()
     32     # Update parameters and take a step using the computed gradient
     33     optimizer.step()

/opt/conda/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    219                 retain_graph=retain_graph,
    220                 create_graph=create_graph)
--> 221         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    222 
    223     def register_hook(self, hook):

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
    130     Variable._execution_engine.run_backward(
    131         tensors, grad_tensors_, retain_graph, create_graph,
--> 132         allow_unreachable=True)  # allow_unreachable flag
    133 
    134 

RuntimeError: Found dtype Double but expected Float
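
One common cause of this exact error, in case it helps (just a guess, since I cannot run the notebook): the model output is float32, but the target passed to the loss is float64, e.g. because it was built from a NumPy array (NumPy defaults to float64). Recent PyTorch versions silently promote the loss to Double in the forward pass, so the mismatch only surfaces in backward(). Casting the target with .float() before computing the loss usually fixes it. A minimal sketch, with nn.MSELoss as a stand-in for your actual loss:

    import numpy as np
    import torch
    import torch.nn as nn

    model = torch.nn.Linear(10, 1)
    criterion = nn.MSELoss()
    x = torch.randn(8, 10)

    target = torch.from_numpy(np.random.rand(8, 1))   # float64 -> DoubleTensor
    loss = criterion(model(x), target)                # no error yet: loss is promoted to Double
    # loss.backward()                                 # would raise: Found dtype Double but expected Float

    loss = criterion(model(x), target.float())        # cast the target to float32
    loss.backward()                                   # works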