SHAP values, RuntimeError: grad can be implicitly created only for scalar outputs

After I trained the model and tried to use the SHAP explainer, this line raises an error:

```python
shap_values = e.shap_values(sequences_to_explain)
```

```
raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```

I have already looked at other posts saying it is related to `loss.backward()`, but those answers did not work for me.

Any suggestions?

I don't know what `shap_values` is calling internally, but the error message indicates that `backward()` is called on a tensor that contains more than a single element (i.e. is not a scalar tensor).
In these cases you would either need to reduce the tensor to a scalar (e.g. via `torch.mean`) or provide a gradient tensor to `backward` via `tensor.backward(grad)`, where `grad` has the same shape as `tensor`.
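A minimal sketch of both options, independent of SHAP's internals (the toy `Linear` model is just for illustration):

```python
import torch

model = torch.nn.Linear(4, 3)
x = torch.randn(2, 4)
out = model(x)  # shape [2, 3], not a scalar

# out.backward() here would raise:
# RuntimeError: grad can be implicitly created only for scalar outputs

# Option 1: reduce the output to a scalar first
out.mean().backward()

# Option 2: pass a gradient tensor with the same shape as the output
model.zero_grad()
out = model(x)
out.backward(torch.ones_like(out))
```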

@ptrblck Thank you so much for your reply. I tried that, but I got the same error.
Here is the full traceback:
```
  File "/mnt/train_model.py", line 247, in <module>
    shap_values = e.shap_values(sequence)
  File "/home/Guest/.local/lib/python3.6/site-packages/shap/explainers/_deep/__init__.py", line 124, in shap_values
    return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)
  File "/home/Guest/.local/lib/python3.6/site-packages/shap/explainers/_deep/deep_pytorch.py", line 185, in shap_values
    sample_phis = self.gradient(feature_ind, joint_x)
  File "/home/Guest/.local/lib/python3.6/site-packages/shap/explainers/_deep/deep_pytorch.py", line 123, in gradient
    allow_unused=True)[0]
  File "/home/Guest/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 229, in grad
    grad_outputs_ = _make_grads(outputs, grad_outputs_)
  File "/home/Guest/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 51, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```

It's hard to tell why the code is failing, as `self.gradient` is called internally in:

```
File "/home/Guest/.local/lib/python3.6/site-packages/shap/explainers/_deep/deep_pytorch.py", line 185
```

which points to the shap package itself. Could you cross-post the issue to their GitHub so the code owners can check whether this is a known issue or whether the usage of the package might be wrong?
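One usage issue worth checking before filing (an assumption on my side, since the model code isn't shown): SHAP's PyTorch `DeepExplainer` expects the model's forward pass to return a single 2-D tensor of shape `(batch, num_outputs)`. If your forward returns a tuple (e.g. logits plus a hidden state, as RNNs often do) or a differently shaped tensor, a small wrapper can normalize the output. `WrappedModel` here is a hypothetical helper, not part of shap:

```python
import torch
import torch.nn as nn

class WrappedModel(nn.Module):
    """Hypothetical wrapper: expose only a (batch, num_outputs) tensor to SHAP."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        if isinstance(out, tuple):  # e.g. (logits, hidden) from an RNN
            out = out[0]
        # flatten any trailing dims so SHAP sees a 2-D output
        return out.reshape(out.size(0), -1)

# usage sketch: e = shap.DeepExplainer(WrappedModel(model), background_batch)
```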