RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
     30 # Backward pass
     31 torch.set_default_dtype(torch.float64)
---> 32 loss.backward()
     33 # Update parameters and take a step using the computed gradient
     34 optimizer.step()
I assume this could create type mismatches, since the forward pass wouldn't match the backward pass anymore.
Could you explain why you are changing the default dtype globally during training?
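To illustrate the concern, here is a minimal sketch (using a stand-in nn.Linear model, since the actual model isn't shown in the thread) of the mismatch a mid-training dtype switch can cause:

import torch
import torch.nn as nn

# Stand-in model; its parameters are created under the float32 default.
model = nn.Linear(4, 1)

# Global switch, as on line 31 of the snippet above.
torch.set_default_dtype(torch.float64)

# New tensors are now created as float64...
x = torch.randn(2, 4)

# ...so the forward pass fails with a dtype mismatch, e.g.
# RuntimeError: expected scalar type Double but found Float
out = model(x)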
I copied that error from a test where I was just trying something out… My mistake. Here is the actual error:
Epoch:   0%|          | 0/4 [00:00<?, ?it/s]
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
     29 train_loss_set.append(loss.item())
     30 # Backward pass
---> 31 loss.backward()
     32 # Update parameters and take a step using the computed gradient
     33 optimizer.step()