Hi, I am trying to implement a visual-inertial network using an RNN.

The code below is intended to take 10 sequential IMU readings (6 dimensions each) at a time and compute the relative accumulated 6-DoF motion.

So, what I tried to build is an RNN with 6 inputs that consumes a sequence of 10 steps to produce a 6-DoF output. The hidden state should be passed on to the next stage to compute the next 6-DoF pose. At the very beginning, zeros are passed in as the hidden state to signal the start of the sequence.
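To make sure I describe the setup correctly, here is a minimal standalone sketch of that idea (toy data, names are my own, not from my actual code): a 2-layer LSTM over a window of 10 IMU readings of 6 values each, with zero-initialized hidden/cell states marking the start:

```python
import torch
import torch.nn as nn

# 2-layer LSTM: 6-dim IMU input, 6-dim hidden state, batch_first layout
rnn = nn.LSTM(input_size=6, hidden_size=6, num_layers=2, batch_first=True)

imu_window = torch.randn(1, 10, 6)  # (batch, seq_len=10, features=6)
h0 = torch.zeros(2, 1, 6)           # (num_layers, batch, hidden) -> start marker
c0 = torch.zeros(2, 1, 6)

out, (h_n, c_n) = rnn(imu_window, (h0, c0))
pose = out[:, -1, :]                # last time step -> one 6-DoF estimate
print(pose.shape)                   # torch.Size([1, 6])
```

The returned `(h_n, c_n)` would then be fed into the next window, which is exactly where my problem below comes from.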

While running the code I got the following error:

*** RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

I have carefully read the related post here, but it is still not clear to me how to train my RNN on an effectively infinite stream of data, as in my case.

```
import torch
import torch.nn as nn
from torch.autograd import Variable


class Vinet(nn.Module):
    def __init__(self, batchNorm=True, div_flow=20):
        super(Vinet, self).__init__()
        self.rnnIMU = nn.LSTM(
            input_size=6,
            hidden_size=6,
            num_layers=2,
            batch_first=True)
        self.linear1 = nn.Linear(6, 512)
        self.linear2 = nn.Linear(512, 128)
        self.linear3 = nn.Linear(128, 6)
        self.linear1.cuda()
        self.linear2.cuda()
        self.linear3.cuda()

    def forward(self, imu, imu_h_in, imu_c_in):
        imu_out, (imu_h_out, imu_c_out) = self.rnnIMU(
            imu.view(1, -1, 6), (imu_h_in, imu_c_in))
        imu_out = imu_out[:, -1, :]  # keep only the last time step
        l_out1 = self.linear1(imu_out.cuda())
        l_out2 = self.linear2(l_out1)
        l_out3 = self.linear3(l_out2)
        return l_out3, imu_h_out, imu_c_out


# Zero-initialized hidden/cell states mark the start of the sequence.
imu_h_prev = Variable(torch.zeros(2, 1, 6))
imu_c_prev = Variable(torch.zeros(2, 1, 6))
for k in range(epoch):
    for i in range(start, end):  # len(mydataset)-1
        data_imu, target_f2f, target_global = mydataset.load_img_bat(i, batch)
        optimizer.zero_grad()
        imu_h = imu_h_prev
        imu_c = imu_c_prev
        ## Forward
        output, imu_h_out, imu_c_out = model(data_imu, imu_h, imu_c)
        loss = criterion(output.cpu(), target_f2f)
        loss.backward(retain_graph=True)
        optimizer.step()
        # Carry the hidden/cell states over to the next window.
        imu_h_prev = imu_h_out
        imu_c_prev = imu_c_out
```
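For what it's worth, I tried to reproduce the issue in a toy version of my loop (CPU only, random data, made-up names). Detaching the hidden/cell states before reusing them in the next window (i.e. truncated BPTT) makes the error go away without `retain_graph=True`, but I am not sure whether cutting the graph between windows is the right thing to do when the pose should depend on the whole history:

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=6, hidden_size=6, num_layers=2, batch_first=True)
head = nn.Linear(6, 6)
optimizer = torch.optim.SGD(
    list(rnn.parameters()) + list(head.parameters()), lr=0.01)
criterion = nn.MSELoss()

h = torch.zeros(2, 1, 6)
c = torch.zeros(2, 1, 6)
for step in range(3):
    imu_window = torch.randn(1, 10, 6)  # stand-in for one batch of IMU data
    target = torch.randn(1, 6)          # stand-in for target_f2f

    optimizer.zero_grad()
    out, (h, c) = rnn(imu_window, (h, c))
    loss = criterion(head(out[:, -1, :]), target)
    loss.backward()      # no retain_graph needed once states are detached
    optimizer.step()

    # Keep the state values but drop their autograd history, so the next
    # backward() does not try to traverse the previous window's graph.
    h = h.detach()
    c = c.detach()
```

Is this detach-between-windows pattern the intended way to handle my case, or is there a way to backpropagate through more than one window?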