Please help me with this error

Hello everyone,
I am trying to run an LSTM on multivariate time series data.

But the source code below raises an error that says:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

The source code is below.
Thank you in advance.

class LSTMModel(torch.nn.Module):
    def __init__(self, n_features, seq_length):
        super(LSTMModel, self).__init__()
        self.n_features = n_features
        self.seq_len = seq_length
        self.n_hidden = n_features  # number of hidden states
        self.n_layers = 1           # number of LSTM layers (stacked)

        self.l_gru = torch.nn.LSTM(input_size=n_features,
                                   hidden_size=self.n_hidden,
                                   num_layers=self.n_layers,
                                   batch_first=True)
        self.l_linear = torch.nn.Linear(self.n_hidden * self.seq_len, 1)
        # batch_size here comes from the enclosing scope
        self.hidden = self.init_hidden(batch_size)


    def init_hidden(self, batch_size):
        # even with batch_first=True, the (h, c) shapes stay
        # (num_layers, batch, hidden_size), as in the docs
        hidden_state = torch.zeros(self.n_layers, batch_size, self.n_hidden)
        cell_state = torch.zeros(self.n_layers, batch_size, self.n_hidden)
        return (hidden_state, cell_state)


    def forward(self, x):
        batch_size, seq_len, _ = x.size()
        # feed the stored hidden state and keep the returned one for the next call
        lstm_out, self.hidden = self.l_gru(x, self.hidden)
        x = lstm_out.contiguous().view(batch_size, -1)
        return self.l_linear(x)


mv_net.train()
for t in range(train_episodes):
    for b in range(0, len(X), batch_size):
        inpt = X[b:b+batch_size, :, :]
        target = y[b:b+batch_size]

        x_batch = torch.tensor(inpt, dtype=torch.float32)
        y_batch = torch.tensor(target, dtype=torch.float32)

        #mv_net.init_hidden(x_batch.size(0))
        output = mv_net(x_batch)
        loss = criterion(output.view(-1), y_batch)

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print('step : ', t, 'loss : ', loss.item())

Hi,

I think you are not handling the hidden state properly.
In particular, I think you should be resetting self.hidden when you get a new sequence.
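
For example, something along these lines in your training loop should avoid the error. This is only a rough sketch reusing your own names (mv_net, X, y, batch_size, criterion, optimizer); the one new line is the reset of self.hidden. Note that your commented-out call would not help as written, because init_hidden only returns the new tuple and never assigns it to self.hidden:

mv_net.train()
for t in range(train_episodes):
    for b in range(0, len(X), batch_size):
        inpt = X[b:b+batch_size, :, :]
        target = y[b:b+batch_size]

        x_batch = torch.tensor(inpt, dtype=torch.float32)
        y_batch = torch.tensor(target, dtype=torch.float32)

        # reset the hidden state for the new batch, so the graph from the
        # previous iteration is not reused by the next backward()
        mv_net.hidden = mv_net.init_hidden(x_batch.size(0))

        output = mv_net(x_batch)
        loss = criterion(output.view(-1), y_batch)

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print('step : ', t, 'loss : ', loss.item())

If you actually want the state values to carry over between batches instead, a common alternative is to detach the stored state each iteration, e.g. mv_net.hidden = tuple(h.detach() for h in mv_net.hidden), which keeps the values but drops the old graph.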

Hello, albanD
Thank you for your nice answer.
Now I figured out what the problem is.

But may I ask you one more question?
Why should I reset self.hidden every time?
I don’t understand.

Thank you and have a nice day, albanD

Hi,

You might want to look at other code that runs an LSTM, as I don’t know :confused:
Maybe this tutorial will help.

Hello, albanD
Thank you for your help.
LSTMs are not easy for me; I will study more, as you suggest.
Have a nice day, albanD