RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

Hi, I have a problem here. I have a sequence of Variables which are the outputs of a bi-directional RNN, and I stacked them into a matrix xs_h whose dimension is (seq_length, batch_size, hidden_size). Then I want to update the matrix xs_h by convolving over two slices of xs_h. Some of the code is as follows:

# tc is torch (imported as tc); idx_0, idx_1, bidx index into the sequence and batch
new_xs_h = xs_h.clone()
# slice out the two hidden states at positions idx_0 and idx_1 for batch entry bidx
vp, vc = xs_h[idx_0, bidx], xs_h[idx_1, bidx]
# stack the transformed slices and add a leading channel dimension for the conv
x = tc.stack([self.f1(vp), self.f2(vc)], dim=1)[None, :, :]
# write the convolved result back into the cloned matrix
new_xs_h[idx_1, bidx] = self.tanh(self.l_f2(self.conv(x).squeeze()))

Actually, I want to update the Variable xs_h and then let the updated matrix new_xs_h go back into my computation graph. However, I get the following error when I call backward() after running the code above:

RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

I do not know why; any reply will be appreciated.
Thanks.
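
A minimal, self-contained snippet (illustrative only, not my real model) that reproduces the same error:

import torch
from torch.autograd import Variable

xs_h = Variable(torch.randn(5, 2, 4))  # requires_grad defaults to False
loss = xs_h.sum()                      # loss therefore has no grad_fn
loss.backward()                        # raises the RuntimeError above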

It sounds like the problem is that your xs_h doesn’t have requires_grad=True. Have you tried creating Variables with requires_grad=True?

Thanks for the reply. The Variable xs_h is not created by me; it is the output of the Bi-RNN when I feed in the word embeddings, so its requires_grad attribute is False.

Okay. You can make a new Variable with requires_grad=True:

from torch.autograd import Variable

var_xs_h = Variable(xs_h.data, requires_grad=True)  # note: .data detaches this from the graph that produced xs_h

Did the suggestion solve your problem? I have the same error thrown at me, but it isn’t very helpful since I don’t know which part of my code has requires_grad set to False. I went ahead and set everything I could find to trainable, but it still didn’t fix it…

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

For me, loss = Variable(loss, requires_grad=True) worked.

I was trying to use a plain float as the loss, and it was giving me the same error.
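
A minimal sketch of the alternative, keeping the loss as a Variable built from differentiable ops (pred and target are made-up names here):

import torch
from torch.autograd import Variable

pred = Variable(torch.randn(4), requires_grad=True)
target = Variable(torch.randn(4))

loss = ((pred - target) ** 2).mean()  # built from tensor ops, so it has a grad_fn
loss.backward()                       # works without re-wrapping loss

(Re-wrapping a loss in a fresh Variable makes the error go away, but it also detaches the loss from everything computed before it, so upstream parameters receive no gradients.)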

Got the same error in a very different problem, and gt_tugsuu’s post helped. Thanks.

Thank you for your answer.

What is the new solution to this, now that Variables don’t exist in PyTorch anymore?

Is this the expected behavior of the following:

xt = torch.FloatTensor(x, requires_grad=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: new() received an invalid combination of arguments - got (list, requires_grad=bool), but expected one of:

  • (torch.device device)
  • (torch.Storage storage)
  • (Tensor other)
  • (tuple of ints size, torch.device device)
    didn’t match because some of the keywords were incorrect: requires_grad
  • (object data, torch.device device)
    didn’t match because some of the keywords were incorrect: requires_grad
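
From what I can tell, the legacy torch.FloatTensor constructor does not accept requires_grad; in PyTorch >= 0.4 the torch.tensor factory does (x here is a placeholder list of floats):

import torch

x = [1.0, 2.0, 3.0]  # placeholder data

xt = torch.tensor(x, requires_grad=True)  # the factory function accepts requires_grad
# or mark an existing tensor in place:
xt = torch.FloatTensor(x).requires_grad_()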

Hi richard, I was also getting the same error. Using this, it got fixed, but now the loss is not decreasing with each epoch; it stays constant. I think this is related to this Variable.

Could you give the link to gt_tugsuu’s post? I am facing the problem of the output and loss staying the same. Thanks.