Batch dimension for calculating loss

I’m attempting to train an RNN, and am unsure how to handle batches when calculating the loss.

A criterion such as nn.MSELoss() requires the prediction and target to be of the same shape.

For an RNN, ‘output’ is given as (seq_len, batch, num_directions * hidden_size), and it seems to me that ‘batch’ is no longer represented (as per the documentation).

I’ve shaped my target data/labels to be the same shape as ‘input’: (seq_len, batch, input_size), so I have a target value for each batch element and feature.

Thanks in advance!

What do you mean the batch is no longer represented? It’s in the second dimension :). If you need it in the first dimension, you can simply do `output.transpose(0, 1)`.
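A minimal sketch illustrating this (the sizes are arbitrary; `hidden_size` is set equal to `input_size` here only so the target can share the input’s shape, as described above):

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 5, 3, 4, 4

rnn = nn.RNN(input_size, hidden_size)        # batch_first=False (the default)
x = torch.randn(seq_len, batch, input_size)  # (seq_len, batch, input_size)
target = torch.randn(seq_len, batch, hidden_size)

output, h_n = rnn(x)
print(output.shape)  # torch.Size([5, 3, 4]) -> (seq_len, batch, num_directions * hidden_size)

# Shapes already match, so the loss can be computed directly:
loss = nn.MSELoss()(output, target)

# If you want batch in the first dimension: (batch, seq_len, hidden_size)
batch_first = output.transpose(0, 1)
print(batch_first.shape)  # torch.Size([3, 5, 4])
```

So the batch dimension is there the whole time; `transpose` just reorders the dimensions without copying the underlying data.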