Assigning to alphas[t] modifies alphas in-place.
I’m not sure what your code is doing exactly, but maybe storing the intermediate results in a temporary list and creating a tensor with torch.cat would work for you.
```python
for t in length_sequence:
    alphas[t] = torch.logsumexp(potential[t] * alphas[t-1].unsqueeze(dim=1), dim=0)
```
where potential is the weight of RNN cell t (which requires grad) and alphas[t] is the output at time t.
After that I compute the loss on alphas and call backward to update the weights of the RNN.
You mean store the alphas in a list and then convert it to a tensor?
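Yes, something like this minimal sketch of the pattern: append each step’s result to a Python list and stack the list into a tensor at the end, so no in-place indexing assignment is needed. The shapes and sizes here are assumed purely for illustration:

```python
import torch

# Hypothetical sizes for illustration only
T, N = 4, 3
potential = torch.randn(T, N, N, requires_grad=True)

# Collect each time step's alpha in a plain Python list
alphas = [torch.zeros(N)]  # alpha at t = 0
for t in range(1, T):
    # Out-of-place: append a new tensor instead of writing into a
    # preallocated tensor with alphas[t] = ...
    alphas.append(torch.logsumexp(potential[t] * alphas[t - 1].unsqueeze(dim=1), dim=0))

# Build one (T, N) tensor from the list without any in-place writes
alphas = torch.stack(alphas)

loss = alphas[-1].sum()
loss.backward()  # gradients flow back to potential
```

torch.stack adds a new leading dimension; if your per-step tensors already carry a time dimension you would use torch.cat instead.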
It works, so thanks!!!
By the way, do you have a recommendation for systematically learning PyTorch? I have read the recently published tutorial and I want to learn more.
Good to hear it’s working!
Which tutorial are you referring to?
I would recommend skimming through all the tutorials on our website and then picking a use case to get your hands dirty. If you get stuck or don’t know how to use specific methods, try to search for similar questions here, and please feel free to ask in case you cannot find a good reference.