Language Models' interpretability

Is there any efficient way to analyze how a piece of knowledge is encoded in LMs?

Honestly, understanding the mechanics of Transformers will really help you. A good way to build that intuition is to work directly with PyTorch's Transformer encoder (`torch.nn.TransformerEncoder`), since you can step through exactly what each layer computes and look at the intermediate representations where a piece of knowledge would have to live. A minimal sketch is below.
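
Here is a small, self-contained sketch (the dimensions and the hook-based capture are my own illustrative choices, not a prescribed method): it builds a toy `torch.nn.TransformerEncoder`, runs a forward pass, and records each layer's output with forward hooks. Those per-layer hidden states are the kind of thing you'd typically inspect or probe when asking where a piece of knowledge is represented.

```python
import torch
import torch.nn as nn

# Toy dimensions, purely illustrative.
d_model, n_heads, n_layers, seq_len, batch = 64, 4, 2, 10, 1

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=n_heads, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

# Capture the output of every encoder layer via forward hooks.
layer_outputs = []

def save_output(module, inputs, output):
    layer_outputs.append(output.detach())

for layer in encoder.layers:
    layer.register_forward_hook(save_output)

# Stand-in for token embeddings (batch, seq_len, d_model).
x = torch.randn(batch, seq_len, d_model)
with torch.no_grad():
    encoder(x)

for i, h in enumerate(layer_outputs):
    print(f"layer {i}: hidden states shape {tuple(h.shape)}")
```

Once you can pull out these per-layer representations, you can compare them across inputs or fit a simple linear probe on top of them, which is a common, efficient first pass at checking whether (and where) a given piece of knowledge is linearly readable from the model.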