Request Advice on Feature Importance


Can anyone give some advice on how to get feature importance out of a model? I have seen partial dependence plots used with tree-based models, but I need something that works with a PyTorch model I have developed.

Any advice on how I can figure out which of my variables are the most important?



I assume that your model's prediction is differentiable with respect to the input. If so, you can compute the expectation (over your data) of the absolute value of the gradient of the output with respect to each input variable; features with larger average gradient magnitude influence the prediction more. Check this paper out:
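A minimal sketch of that idea in plain PyTorch, assuming a differentiable model and a batch of inputs (the small `nn.Sequential` model and random data here are stand-ins for your own network and validation set):

```python
import torch
import torch.nn as nn

# Stand-in model with 4 input features; substitute your trained network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

# Hypothetical batch of inputs; in practice use real (held-out) data.
x = torch.randn(256, 4, requires_grad=True)

# Sum the outputs so a single backward pass gives d(output)/d(input)
# for every sample in the batch at once.
model(x).sum().backward()

# Importance per feature: mean over the batch of |gradient|.
importance = x.grad.abs().mean(dim=0)

# Rank features from most to least important.
ranking = importance.argsort(descending=True)
print(importance, ranking)
```

Note this measures local sensitivity around your data points, so the ranking can differ from what a permutation-based or tree-based importance would give.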

I haven’t tried it yet, but GitHub has the pytorch/captum project for this.