I was wondering what the recommended approach would be for dealing with PackedSequence objects in seq2seq models when inferring future values over a variable number of steps. Without packed sequences, and with a fixed number of steps, I'd usually write the following in the forward function of my model:
def forward(self, sequence: torch.Tensor, steps: int, hx: torch.Tensor = None):
    hidden_states, hx = self.encoder(sequence, hx)
    # ...
    h_t = hidden_states[:, -1, :].unsqueeze(1)
    h_t_inf = []
    for _ in range(steps):
        h_t, hx = self.decoder(h_t, hx)
        h_t_inf.append(self.activation(h_t))
    h_t_inf = torch.cat(h_t_inf, dim=1)
    forecast = self.output(h_t_inf)
    return forecast
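For context, here is a minimal self-contained sketch of the pattern above. The module names (`encoder`, `decoder`, `activation`, `output`) mirror my snippet, but the GRU layers and the sizes are arbitrary assumptions just to make it runnable:

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Hypothetical minimal model matching the forward() above."""
    def __init__(self, input_size=4, hidden_size=8):
        super().__init__()
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.activation = nn.Tanh()
        self.output = nn.Linear(hidden_size, input_size)

    def forward(self, sequence, steps, hx=None):
        hidden_states, hx = self.encoder(sequence, hx)
        # last encoder output becomes the first decoder input
        h_t = hidden_states[:, -1, :].unsqueeze(1)
        h_t_inf = []
        for _ in range(steps):
            h_t, hx = self.decoder(h_t, hx)
            h_t_inf.append(self.activation(h_t))
        h_t_inf = torch.cat(h_t_inf, dim=1)   # (batch, steps, hidden)
        return self.output(h_t_inf)           # (batch, steps, input_size)

model = Seq2SeqForecaster()
x = torch.randn(2, 5, 4)       # batch of 2, length-5 input sequences
forecast = model(x, steps=3)
print(forecast.shape)          # torch.Size([2, 3, 4])
```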
However, how would that work if the sequence object were a PackedSequence and steps were a variable number of steps per sample, e.g. a tensor? I've been able to handle the PackedSequence input using the solution provided with this explanation; however, I am not sure how to deal with a variable number of steps efficiently.
def forward(self, sequence: PackedSequence, steps: torch.Tensor, hx: torch.Tensor = None):
    # how would you deal with `for _ in range(steps)` when steps is a tensor?
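One pattern I've been considering (an assumption on my part, not something I've verified as the recommended approach) is to unroll the decoder for `steps.max()` iterations for the whole batch and then mask out the positions past each sample's own horizon. A sketch of just the decoding loop, with the decoder and sizes made up for illustration:

```python
import torch
import torch.nn as nn

hidden_size = 8
decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
activation = nn.Tanh()

batch = 3
steps = torch.tensor([2, 4, 3])           # per-sample forecast horizons
h_t = torch.randn(batch, 1, hidden_size)  # stand-in for the last encoder state
hx = None

# unroll to the longest horizon for every sample in the batch
outputs = []
for _ in range(int(steps.max())):
    h_t, hx = decoder(h_t, hx)
    outputs.append(activation(h_t))
outputs = torch.cat(outputs, dim=1)       # (batch, max_steps, hidden)

# boolean mask: True where a position is within that sample's horizon
mask = torch.arange(int(steps.max())).unsqueeze(0) < steps.unsqueeze(1)
outputs = outputs * mask.unsqueeze(-1)    # zero out steps beyond each horizon
print(outputs.shape)                      # torch.Size([3, 4, 8])
```

The wasted computation on the masked positions is the cost of keeping the loop batched; whether that is acceptable presumably depends on how spread out the step counts are.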