A three-dim input to PyTorch

I may well have missed an existing thread on this, but I thought I should still ask here.
I'd like to use a three-dim input and output.
Say I have N samples of sentences of length V (which varies between sentences), with a corresponding embedding of size E for each word in a sentence, so the data has shape NVE. It seems that PyTorch wants me to reshape my data to VNE before it will process it. That feels really strange to me, and I'm not sure how to tackle it. I tried using .view, but it seemed to scramble things and did not give the result I wanted.
I don't want this solution to involve an embedding layer; I want PyTorch to accept a 3-dim input directly.
a) Is NVE actually a valid shape for the input, and I was simply doing something wrong?
b) If NVE is not valid, how can I reshape it to VNE?
I appreciate your help :slight_smile:

a) VNE (typically referred to as (seq, batch, feature) in the documentation) is indeed the default, though modules such as nn.RNN have a batch_first argument that changes this.
b) I think you want torch.transpose. Unlike .view, it actually permutes the two dimensions rather than just reinterpreting the memory layout, which is why .view scrambled your data.
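To make the two options concrete, here is a minimal sketch (the sizes N, V, E and the hidden size are placeholder values; for truly varying-length sentences you would additionally need padding/packing):

```python
import torch
import torch.nn as nn

N, V, E = 4, 7, 10                 # example: batch size, seq length, embedding size
x = torch.randn(N, V, E)           # your data, laid out as (batch, seq, feature)

# Option 1: swap the first two dims to match the default (seq, batch, feature)
x_seq_first = x.transpose(0, 1)    # shape (V, N, E) -- permutes, doesn't scramble
print(x_seq_first.shape)           # torch.Size([7, 4, 10])

# Option 2: keep (batch, seq, feature) and tell the module about it
rnn = nn.RNN(input_size=E, hidden_size=16, batch_first=True)
out, h = rnn(x)                    # out has shape (N, V, 16)
print(out.shape)                   # torch.Size([4, 7, 16])
```

With batch_first=True you never have to transpose at all; the RNN consumes and returns (batch, seq, feature) tensors directly.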

Best regards