Loss.backward runtime error

I’m getting the error:

Traceback (most recent call last):
  File "NN.py", line 57, in <module>
    loss.backward()
  File "/home/arvind/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/arvind/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Here is the relevant code:

x_train = torch.empty(train_size, 5, requires_grad=True)
x_train = x_total[1:train_size, :]
y_train = torch.empty(train_size, 1, requires_grad=True)
y_train = y[1:train_size]

model = nn.Sequential(nn.Linear(n_in, n_h),
                      #nn.ReLU(),
                      nn.Linear(n_h, n_out),
                      nn.Sigmoid())

criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(50):
    y_pred = model(x_train)

    y_pred = torch.Tensor(list(y_pred))
    y_train = torch.Tensor(list(y_train))
    loss = criterion(y_pred, y_train)
    print('epoch: ', epoch, ' loss: ', loss.item())

    optimizer.zero_grad()

    #loss.requires_grad = True
    loss.backward()

    optimizer.step()

Hi,

When you do y_pred = torch.Tensor(list(y_pred)), you convert the Tensor to a Python list and then build a brand-new Tensor from that list. Gradients cannot be tracked through a Python list, so the rebuilt Tensor has no grad_fn and is detached from the autograd graph, which is exactly what the error message says. (The y_train = torch.Tensor(list(y_train)) line does the same thing, though a target does not need gradients anyway.)
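You can see the effect in a small standalone sketch (the names here are made up for illustration):

import torch

x = torch.randn(4, requires_grad=True)
out = x * 2
print(out.requires_grad, out.grad_fn)  # True <MulBackward0 ...>: out is attached to the graph

# Rebuilding the tensor from a Python list creates a brand-new leaf with no history
detached = torch.Tensor([float(v) for v in out])
print(detached.requires_grad, detached.grad_fn)  # False None: autograd has nothing to differentiate

try:
    detached.sum().backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn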
You can just use y_pred as is. Or, if you want to make it 1D, use y_pred = y_pred.view(-1).
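With that change, the loop could look like this (a sketch, assuming the same model, x_train and y_train from your snippet):

criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(50):
    y_pred = model(x_train)        # keep the model output as is: it carries the grad_fn
    # y_pred = y_pred.view(-1)     # optional: flatten to 1D if y_train is 1D
    loss = criterion(y_pred, y_train)
    print('epoch: ', epoch, ' loss: ', loss.item())

    optimizer.zero_grad()          # clear gradients from the previous step
    loss.backward()                # works now: loss has a grad_fn
    optimizer.step()               # update the parameters

Note also that x_train and y_train do not need requires_grad=True: during training, gradients are computed for the model parameters (handed to the optimizer via model.parameters()), not for the data.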