Confusion regarding RNN example in docs

Here is an example of RNN from scratch from Pytorch docs (scroll down to example 2):

https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html

However, I thought an RNN was supposed to receive a new feature input at each TIME_STEP, rather than loading all of the features in at once.

So using the example from the tutorial above:
instead of
batch = torch.randn(batch_size, 50)
being fed in with all 50 sequence features at one time, shouldn't the sequence of features be fed in one at a time?

Would this involve some sort of PyTorch slicing?

The length of one sequence input is 50: all 50 values in the sequence are part of the first time step, and the next 50 values are part of the second time step.
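To make the shapes concrete, here is a quick slicing sketch (the sizes are just the ones used in the tweaked example below): indexing the last dimension with `batch[:, :, t]` pulls out one (batch, features) matrix per time step.

```python
import torch

# fake batch: 10 samples, 50 features per step, 5 time steps
batch = torch.randn(10, 50, 5)

# slicing out time step t gives one (batch, features) matrix
x_t = batch[:, :, 0]
print(x_t.shape)  # torch.Size([10, 50])
```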

I've slightly tweaked the PyTorch example to drive this point home:

import torch
import torch.nn as nn

# RNN module from the tutorial: concatenate the current input with the
# previous hidden state, then map to a new hidden state and an output
class RNN(nn.Module):
    def __init__(self, data_size, hidden_size, output_size):
        super().__init__()
        self.i2h = nn.Linear(data_size + hidden_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)

    def forward(self, data, last_hidden):
        combined = torch.cat((data, last_hidden), 1)
        hidden = self.i2h(combined)
        output = self.h2o(hidden)
        return hidden, output

rnn = RNN(50, 20, 10)
loss_fn = nn.MSELoss()

batch_size = 10
TIMESTEPS = 5

# Create some fake data: 50 features per time step, 5 time steps
batch = torch.randn(batch_size, 50, TIMESTEPS)
hidden = torch.zeros(batch_size, 20)
target = torch.zeros(batch_size, 10)

loss = 0
for t in range(TIMESTEPS):
    # yes! you can reuse the same network several times,
    # sum up the losses, and call backward!
    hidden, output = rnn(batch[:, :, t], hidden)  # slice out time step t
    loss += loss_fn(output, target)
loss.backward()
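For what it's worth, the same step-by-step loop can be written with the built-in `nn.RNNCell`. One caveat: `RNNCell` only returns the new hidden state, so the extra `nn.Linear` layer mapping hidden to output here is my own addition, not part of the tutorial.

```python
import torch
import torch.nn as nn

batch_size, n_features, timesteps = 10, 50, 5

cell = nn.RNNCell(input_size=n_features, hidden_size=20)
h2o = nn.Linear(20, 10)  # hidden -> 10-dim output (my addition)
loss_fn = nn.MSELoss()

batch = torch.randn(batch_size, n_features, timesteps)
hidden = torch.zeros(batch_size, 20)
target = torch.zeros(batch_size, 10)

loss = 0
for t in range(timesteps):
    hidden = cell(batch[:, :, t], hidden)  # one time step of features
    loss += loss_fn(h2o(hidden), target)
loss.backward()
```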