How to save the state of an LSTM and use it to initialize the next training sample?

I have a dataset where context needs to be carried across training examples, which must be processed in a given order.

What happens to the LSTM state between training samples by default? Would explicitly passing the previous state into every training example slow down training? Are there better approaches?
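To illustrate what I mean, here is a minimal sketch in PyTorch (the layer sizes are made up). The idea is to keep the `(h, c)` tuple returned for one sample and feed it back in for the next one, detaching it so gradients don't propagate across sample boundaries:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

state = None  # None means the state starts zero-initialized
for step in range(3):
    x = torch.randn(1, 5, 8)  # (batch, seq_len, features) for one training sample
    out, state = lstm(x, state)  # reuse the state carried over from the previous sample
    # detach so backprop does not reach into earlier samples (truncated BPTT)
    state = tuple(s.detach() for s in state)
```

Is carrying `state` around like this the right approach, or is there a standard mechanism for it?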