torch version: 1.4.0+cu100
python version: 3.6
While running the code, I encountered the error:
RuntimeError: expected dtype Double but got dtype Long
when loss.backward() was executed.
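For reference, this error usually means a tensor reaching the loss still has an integer dtype. `torch.tensor` infers its dtype from the Python data, so a target built from integer labels starts out as Long (a minimal sketch, independent of the code below):

```python
import torch

# torch.tensor infers dtype from the Python values it is given:
a = torch.tensor([0, 1, 1, 0])    # integers -> torch.int64 (Long)
b = torch.tensor([0.0, 1.0])      # floats   -> torch.float32
print(a.dtype, b.dtype)
```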
def train(model, train_data, train_target, len_, criterion):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)
    train_data = torch.tensor(train_data, device=device)
    train_target = torch.tensor(train_target, device=device)
    model.double()
    train_data.double()
    train_target.double()
    model.train()
    learning_rate = 0.01
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    epochs = 2
    for epoch in range(epochs):
        print("epoch :", epoch)
        # forward pass and loss
        y_predicted = model(train_data)
        y_predicted.double()
        loss = criterion(y_predicted, train_target)
        # backward pass
        loss.double()
        loss.backward()
        # update
        optimizer.step()
        # init optimizer
        optimizer.zero_grad()
    return
I have tried casting the model and the tensors to double as shown above, but I still get the same error. How can I solve it?
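A minimal sketch of what is likely going wrong here: `Tensor.double()` is not an in-place operation, so calls like `train_data.double()` without reassignment leave the original tensor's dtype untouched (only `model.double()` works in place, because `nn.Module.double()` mutates the module's parameters):

```python
import torch

x = torch.tensor([1, 2, 3])  # integer data -> dtype torch.int64 (Long)
y = x.double()               # double() returns a NEW tensor; x is unchanged
print(x.dtype)               # still torch.int64
print(y.dtype)               # torch.float64

x = x.double()               # reassign to actually keep the cast
print(x.dtype)               # torch.float64
```

So reassigning, e.g. `train_target = train_target.double()`, is the usual way to make the cast stick.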