Hello,
at the moment I am working on a project where I try to use IMU measurements (linear acceleration and angular velocity) and an LSTM-based network to predict the translation and orientation of a mobile agent. In this specific case I use the KITTI dataset; you can find the project here (DeepLIO) (checkout branch deepio). The network is built as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepIO(nn.Module):
    def __init__(self):
        super(DeepIO, self).__init__()
        self.bidirectional = False
        self.num_dir = 2 if self.bidirectional else 1
        self.hidden_size = 512
        self.rnn = nn.LSTM(input_size=6, hidden_size=self.hidden_size,
                           num_layers=2, batch_first=True,
                           bidirectional=self.bidirectional)
        self.drop_out = nn.Dropout(0.25)
        self.fc1 = nn.Linear(self.hidden_size, 256)
        self.bn1 = nn.BatchNorm1d(256)
        self.fc_pos = nn.Linear(256, 3)
        self.fc_ori = nn.Linear(256, 4)

    def forward(self, x):
        """
        args:
            x: a list (batch) of sequences with different lengths
        """
        # get the length of each sequence in the batch
        lengths = [x_.size(0) for x_ in x]
        # pad all sequences with zeros to the same length
        x_padded = nn.utils.rnn.pad_sequence(x, batch_first=True)
        b, s, n = x_padded.shape
        # pack the padded sequences
        x_padded = nn.utils.rnn.pack_padded_sequence(
            x_padded, lengths=lengths, batch_first=True, enforce_sorted=False)
        # compute the feature vectors in the latent space
        out, hidden = self.rnn(x_padded)
        # unpack the feature vectors
        out, lens_unpacked = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        out = out.view(b, s, self.num_dir, self.hidden_size)
        # many-to-one RNN: take the last output step
        y = out[:, -1, 0]
        y = F.relu(self.fc1(y), inplace=True)
        y = self.bn1(y)
        y = self.drop_out(y)
        x_pos = self.fc_pos(y)
        x_ori = self.fc_ori(y)
        return x_pos, x_ori
The input to the network is a list of sequences with different lengths, e.g. B×T×6, where B is the batch size (the length of the list), T is the sequence length of each sequence, and 6 is the length of each IMU measurement (3× linear acceleration, 3× angular velocity).
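Concretely, such a batch is just a Python list of per-sequence tensors; a tiny sketch with made-up lengths:

```python
import torch

# three hypothetical IMU sequences with lengths T = 5, 3, 4;
# each timestep holds 3 linear-acceleration + 3 angular-velocity values
batch = [torch.randn(5, 6), torch.randn(3, 6), torch.randn(4, 6)]
print([tuple(seq.shape) for seq in batch])  # [(5, 6), (3, 6), (4, 6)]
```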
I would like to know whether the way I handle these sequences with different lengths is correct:
I first pad them with zeros so they have the same length, then pack them and pass them to the RNN (LSTM), and at the end unpack them again and take the many-to-one output.
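For reference, here is a minimal, self-contained sketch of that pad → pack → LSTM → unpack round trip, with made-up sequence lengths and a small hidden size:

```python
import torch
import torch.nn as nn

# hypothetical batch: three IMU sequences with lengths 5, 3 and 4 (T x 6 each)
seqs = [torch.randn(5, 6), torch.randn(3, 6), torch.randn(4, 6)]
lengths = [s.size(0) for s in seqs]

# 1) pad with zeros to a common length -> (B, T_max, 6)
padded = nn.utils.rnn.pad_sequence(seqs, batch_first=True)

# 2) pack, so the LSTM skips the padded steps
packed = nn.utils.rnn.pack_padded_sequence(
    padded, lengths=lengths, batch_first=True, enforce_sorted=False)

lstm = nn.LSTM(input_size=6, hidden_size=8, num_layers=1, batch_first=True)
out_packed, (h_n, c_n) = lstm(packed)

# 3) unpack back to a padded tensor -> (B, T_max, hidden_size)
out, lens = nn.utils.rnn.pad_packed_sequence(out_packed, batch_first=True)

print(out.shape)      # torch.Size([3, 5, 8])
print(lens.tolist())  # [5, 3, 4]
```

One property worth noting: for packed input, `h_n` holds each sequence's output at its *last valid* step, whereas `out[:, -1]` contains zero padding for the shorter sequences.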
Thanks
Arash