One of the variables needed for gradient computation has been modified by an inplace operation

When I turn on anomaly detection, the trace I get points to this line as the one that went wrong:

alphas[t] = torch.logsumexp(potential[t] * alphas[t-1].unsqueeze(dim=1), dim=0)

How can I fix it? Does this line use an in-place operation?
One of the variables needed for gradient computation has been modified by an inplace operation

alphas[t] modifies alphas inplace.
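For reference, here's a minimal sketch (the sizes are made up) of how that kind of indexing assignment can trigger the error:

import torch

potential = torch.randn(4, requires_grad=True)  # hypothetical parameter
alphas = torch.ones(4)                          # preallocated buffer

for t in range(1, 4):
  # the indexing assignment writes into `alphas` in place, although
  # alphas[t-1] was already saved for the backward pass of the multiplication
  alphas[t] = torch.logsumexp(potential[t] * alphas[t - 1].unsqueeze(dim=0), dim=0)

alphas.sum().backward()
# RuntimeError: one of the variables needed for gradient computation
# has been modified by an inplace operation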
I’m not sure what your code is exactly doing, but maybe storing the results in a temporary list and creating a tensor with torch.cat would work for you.

My code is much like an RNN, that is:

for t in range(1, length_sequence):
  alphas[t] = torch.logsumexp(potential[t] * alphas[t-1].unsqueeze(dim=1), dim=0)

where potential[t] is the weight of RNN cell t (which requires grad) and alphas[t] is the output at time t.
After that I compute a loss on alphas and call backward to update the weights of the RNN.

You mean store alphas in a list and then convert it to a tensor afterwards?

After using torch.cat it works, but the gradient for potential is None. I want the gradient for potential since I need to optimize this parameter.

torch.cat will not block the gradient flow:

import torch

# collect each step in a Python list instead of assigning into a tensor in place
alphas = [torch.ones(1)]
potential = torch.randn(5, requires_grad=True)
for t in range(5):
  alphas.append(
      torch.logsumexp(
          potential[t] * alphas[t - 1].unsqueeze(dim=1),
          dim=0))

# concatenating the list entries does not break the autograd graph
alphas = torch.cat(alphas)
alphas.mean().backward()

print(potential.grad)
> tensor([ 0.5746,  0.2980, -0.0386,  0.0045, -0.0703])
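Applied to your recursion, the same pattern could look like this. Just a sketch: I'm assuming potential has shape (T, N, N) and each alphas[t] has shape (N,), and I use torch.stack so alphas ends up with shape (T, N); adjust to your actual shapes and loss:

import torch

T, N = 5, 3
potential = torch.randn(T, N, N, requires_grad=True)

alphas = [torch.zeros(N)]  # alpha at t = 0
for t in range(1, T):
  # same update as before, but the result is appended instead of assigned in place
  alphas.append(
      torch.logsumexp(potential[t] * alphas[-1].unsqueeze(dim=1), dim=0))

alphas = torch.stack(alphas)        # shape (T, N)
loss = alphas[-1].logsumexp(dim=0)  # any loss defined on alphas works
loss.backward()

print(potential.grad.shape)
> torch.Size([5, 3, 3])

Note that potential[0] is never used in this sketch, so its part of the gradient will simply stay zero.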

It works, so thanks!!! :+1:
By the way, do you have some way to systematically learn PyTorch? I have read the recently published tutorial and I want to learn more.

Good to hear it’s working!
Which tutorial are you referring to?
I would recommend skimming through all tutorials on our website and then picking a use case to get your hands dirty. If you get stuck or don’t know how to use specific methods, try to search for similar questions here, and please feel free to ask here in case you cannot find a good reference. :slight_smile:


The Deep-Learning-with-PyTorch book.