Concatenate embeddings dynamically in Pytorch for NLP

I am trying to perform a sequence labelling task where I have initialized the word embedding weights with GloVe. I also want to incorporate character-level features into the model. I am able to train an LSTM over the character vectors and save its final state. Should I use this final state as the character representation of the word?

How can I append a character vector of dim, say, 5 to each word in the sentence dynamically (i.e., at run time) using PyTorch? The final input would then be of dim = glove_dim + character_dim.
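For reference, this is a minimal sketch of the character-level part I have so far — the vocabulary size, embedding dim, and character indices are made-up placeholders:

```python
import torch
import torch.nn as nn

char_vocab_size = 30   # hypothetical character vocabulary size
char_emb_dim = 10      # hypothetical character embedding dim
char_hidden_dim = 5    # desired character feature dim per word

char_embedding = nn.Embedding(char_vocab_size, char_emb_dim)
char_lstm = nn.LSTM(char_emb_dim, char_hidden_dim, batch_first=True)

# one word of 4 characters, as made-up character indices
word_chars = torch.tensor([[3, 7, 1, 9]])      # shape (1, 4)
char_embs = char_embedding(word_chars)         # shape (1, 4, 10)
_, (h_n, _) = char_lstm(char_embs)             # h_n: (1, 1, 5)
char_repr = h_n.squeeze(0)                     # (1, 5): final state as the word's char feature
print(char_repr.shape)  # torch.Size([1, 5])
```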

P.S: I am trying to recreate the code exercise suggested here.

You can use torch.cat to concatenate the character-level and word-level embedding outputs.

Thanks! I implemented it last week after writing the character-level LSTM code.