What interpretability methods do you use?

I am trying to get a sense of the local and global interpretability methods most commonly used with PyTorch models. What methods do you use, and why? What problems are you trying to solve with them? And what is your biggest frustration with these tools?
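For context, the simplest local method I'm aware of is vanilla gradient saliency: backpropagate the predicted class score to the input and look at the gradient magnitudes. A minimal sketch in plain PyTorch (the toy model here is just a placeholder, not a real use case):

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

# A single input with gradients enabled so we can attribute to its features.
x = torch.randn(1, 4, requires_grad=True)
logits = model(x)
target = logits.argmax(dim=1)  # explain the predicted class

# Vanilla gradient saliency: d(logit_target) / d(input)
logits[0, target].backward()
saliency = x.grad.abs().squeeze(0)  # one attribution score per input feature
print(saliency.shape)
```

I'm curious whether people mostly stick with gradient-based attributions like this, or reach for library implementations (Captum, SHAP, etc.) of more robust methods.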