Hello, here is the sequential forward pass of an LSTM implemented in Torch.

In each iteration we need to reuse the previous hidden and cell states. In Torch, assignment aliases tensors rather than copying them, so both names refer to the same underlying storage; for example,

h[{{}, t}] corresponds to self.output[{{}, t}],

where h is the tensor being written at the current step, and self.output is really just a placeholder for the whole sequence.
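The same aliasing can be demonstrated in a few lines. This is a minimal sketch (the `output` tensor is a stand-in for `self.output`, with made-up sizes): basic integer/slice indexing returns a view, so an in-place operation on the slice writes through to the underlying buffer.

```python
import torch

# placeholder buffer, standing in for self.output (made-up sizes)
output = torch.zeros(2, 3, 4)

# basic (integer/slice) indexing returns a view, not a copy
next_h = output[:, 1]

# an in-place op on the view writes through to the buffer
next_h.add_(1.0)

print(output[:, 1].sum().item())  # 8.0 -- all 2 * 4 elements were updated
```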

```
-- self.output and self.cell persist across calls; h and c alias them,
-- so every write through h / c lands in the stored output.
local h, c = self.output, self.cell
h:resize(N, T, H):zero()
c:resize(N, T, H):zero()
local prev_h, prev_c = h0, c0
self.gates:resize(N, T, 4 * H):zero()
for t = 1, T do
  local cur_x = x[{{}, t}]
  -- narrow out timestep t: these are views into h, c and self.gates
  local next_h = h[{{}, t}]
  local next_c = c[{{}, t}]
  local cur_gates = self.gates[{{}, t}]
  -- gates = bias + x_t * Wx + h_{t-1} * Wh
  cur_gates:addmm(bias_expand, cur_x, Wx)
  cur_gates:addmm(prev_h, Wh)
  cur_gates[{{}, {1, 3 * H}}]:sigmoid()      -- i, f, o gates
  cur_gates[{{}, {3 * H + 1, 4 * H}}]:tanh() -- g (candidate)
  local i = cur_gates[{{}, {1, H}}]
  local f = cur_gates[{{}, {H + 1, 2 * H}}]
  local o = cur_gates[{{}, {2 * H + 1, 3 * H}}]
  local g = cur_gates[{{}, {3 * H + 1, 4 * H}}]
  next_h:cmul(i, g)                  -- next_h temporarily holds i * g
  print(self.output[{{}, t}])        -- the write above is already visible here
  next_c:cmul(f, prev_c):add(next_h) -- c_t = f * c_{t-1} + i * g
  next_h:tanh(next_c):cmul(o)        -- h_t = o * tanh(c_t)
  prev_h, prev_c = next_h, next_c
end
return self.output
```

In PyTorch, this aliasing does not seem to work automatically after the initialization step. I know I can manually re-assign the tensors with the code below, but it is not as clean as the Torch version. I'm fairly sure PyTorch has a similar mechanism. Could you give me some keywords that might help me out?

```
# Initialize hidden states
h, c = self.output, self.cell
...
h.resize_(N, T, H).zero_()
c.resize_(N, T, H).zero_()
for t in range(T):
    cur_x = x[:, t]
    next_h = h[:, t]
    next_c = c[:, t]
    # state update
    # i, f, c, g
    .....
    # store the hidden states (is this replaceable with some in-place operations??)
    self.output[:, t] = next_h
    self.cell[:, t] = next_c
```
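For comparison, here is a sketch of the loop written with in-place writes through slice views instead of the explicit `self.output[:, t] = next_h` assignment at the end. The sizes and gate values below are placeholders, not the real LSTM update; the point is only the aliasing: `copy_` fills the view in place, whereas rebinding `next_h = o * torch.tanh(next_c)` would create a fresh tensor and break the link to `output`.

```python
import torch

# made-up sizes; output/cell stand in for self.output/self.cell
N, T, H = 2, 5, 4
output = torch.zeros(N, T, H)
cell = torch.zeros(N, T, H)

prev_h = torch.zeros(N, H)
prev_c = torch.zeros(N, H)

for t in range(T):
    # basic slicing returns views sharing storage with output/cell
    next_h = output[:, t]
    next_c = cell[:, t]

    # dummy gate activations standing in for the real gate math
    i = torch.sigmoid(torch.randn(N, H))
    f = torch.sigmoid(torch.randn(N, H))
    o = torch.sigmoid(torch.randn(N, H))
    g = torch.tanh(torch.randn(N, H))

    # in-place copy_ writes through the views, so no separate
    # `output[:, t] = next_h` assignment is needed afterwards
    next_c.copy_(f * prev_c + i * g)
    next_h.copy_(o * torch.tanh(next_c))

    prev_h, prev_c = next_h, next_c
```

Since `next_h` and `next_c` are views, the buffers `output` and `cell` end up filled as a side effect of the in-place copies.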

Thanks for any input.