Feed LSTM multiple thought vectors

Hi,
Is it possible to feed the initial thought vector to the LSTM cell at every time step to predict the output, instead of the hidden input, or concatenated with the hidden input? This is for image captioning (refer to the image).

I think it is possible. During training with the LSTM cell (which runs in a for loop), just re-feed the same vector instead of the hidden state that the model returns, as done here.

Any sample code, please…
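Here is a minimal PyTorch sketch of the idea, not a definitive implementation: the image's "thought" vector is projected once and then concatenated with the word embedding at every decoding step, so the cell sees the same image context each time. All names (`proj`, `thought`, the dimensions, the greedy decoding) are illustrative assumptions, not from the original post.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, feat_dim, embed_dim, hidden_dim, vocab, steps = 2, 16, 8, 32, 10, 5

# Hypothetical encoder output: one "thought" vector per image.
thought = torch.randn(batch, feat_dim)

# Project the thought vector to the embedding size so it can be
# concatenated with the word embedding at each step.
proj = nn.Linear(feat_dim, embed_dim)
embed = nn.Embedding(vocab, embed_dim)
cell = nn.LSTMCell(embed_dim * 2, hidden_dim)  # input = [word_emb ; thought]
out_layer = nn.Linear(hidden_dim, vocab)

h = torch.zeros(batch, hidden_dim)
c = torch.zeros(batch, hidden_dim)
word = torch.zeros(batch, dtype=torch.long)  # assumed <start> token id 0

v = proj(thought)  # computed once, re-fed at every time step
logits_per_step = []
for _ in range(steps):
    # Same thought vector v is concatenated with the current word embedding.
    x = torch.cat([embed(word), v], dim=1)
    h, c = cell(x, (h, c))
    logits = out_layer(h)
    logits_per_step.append(logits)
    word = logits.argmax(dim=1)  # greedy decoding, just for this sketch
```

If you want to replace the hidden input entirely rather than concatenate, you could instead pass `(v_hidden, c)` into the cell each step, where `v_hidden` is the thought vector projected to `hidden_dim`; whether that helps over concatenation is something you would have to verify empirically.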