Question about RNN Implementation

An example RNN implementation contains this code for the training section:

```python
# Train the model
n_total_steps = len(train_loader)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # origin shape: [N, 1, 28, 28]
        # resized: [N, 28, 28]
        images = images.reshape(-1, sequence_length, input_size).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
```
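For context, `model` in this loop is (as far as I can tell) the usual tutorial classifier: it runs the RNN over all 28 rows of an image but feeds only the last time step's hidden state into the output layer, so the whole sequence produces a single label. Reconstructed from memory, not the exact code from the example, it looks roughly like this:

```python
import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: [N, sequence_length, input_size]
        out, _ = self.rnn(x)    # out: [N, sequence_length, hidden_size]
        out = out[:, -1, :]     # keep only the last time step
        return self.fc(out)     # one prediction per sequence: [N, num_classes]
```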

My question is this: here, all entries within a sequence share the same label. The example uses MNIST, where each 28×28 image is treated as a sequence of its 28 rows, so it makes sense that every time step maps to the same image-level classification. However, I'm trying to build an implementation in which each entry in a sequence can have a different output, and subsequent outputs depend on the predictions made at earlier steps.
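To make that concrete, here is a rough sketch of the kind of model I have in mind (this is entirely my own placeholder code, not from the example; feeding `prev_pred` back into the cell is exactly the part I'm asking about):

```python
import torch
import torch.nn as nn

class AutoregressiveRNN(nn.Module):
    """Emit a prediction at every time step; each step's input also
    includes the previous step's prediction (placeholder sketch)."""
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.num_classes = num_classes
        # the cell sees the current input concatenated with the previous prediction
        self.cell = nn.GRUCell(input_size + num_classes, hidden_size)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: [N, sequence_length, input_size]
        N, seq_len, _ = x.shape
        h = x.new_zeros(N, self.cell.hidden_size)
        prev_pred = x.new_zeros(N, self.num_classes)  # no prediction before step 0
        outputs = []
        for t in range(seq_len):
            step_in = torch.cat([x[:, t, :], prev_pred], dim=1)
            h = self.cell(step_in, h)
            logits = self.fc(h)                       # one output per time step
            outputs.append(logits)
            prev_pred = logits.softmax(dim=1)         # feed the prediction forward
        return torch.stack(outputs, dim=1)            # [N, sequence_length, num_classes]
```

I imagine the loss would then be computed over every time step rather than once per sequence, something like `criterion(outputs.reshape(-1, num_classes), step_labels.reshape(-1))`, assuming a per-step label tensor `step_labels` of shape `[N, sequence_length]`.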

Does anyone know of any existing RNN implementations that work this way?