PyTorch LSTM returns two values, why?

I am new to NLP and am using an LSTM for prediction and classification. The current PyTorch 1.6 tutorial (https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging) calls the LSTM via x, _ = self.lstm(x). Older posts on the internet instead use x, self.hidden_cell = self.lstm(x, self.hidden_cell), which no longer works for me and produces the cryptic
"RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed."

So my question is: is there any useful information in that hidden_cell returned by the LSTM that could be used for better classification and/or prediction?

It returns tensors with the shapes described in the docs.
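
Checking with a toy example (again with made-up sizes), the shapes match what the docs describe:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)                # (seq_len, batch, input_size)

output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([5, 3, 20]) - hidden state for every time step
print(h_n.shape)     # torch.Size([2, 3, 20]) - final hidden state, one per layer
print(c_n.shape)     # torch.Size([2, 3, 20]) - final cell state, one per layer
```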