I am using Integrated Gradients from Captum to compute feature attributions on a model trained with DP-SGD via the Opacus library. However, calling attribute() raises the exception below. The same call on a non-private model works without any issues. Any hints on why Captum throws this error on a private model would be much appreciated.
attributions_train, delta_train = ig.attribute(input_train, baseline_train, target=0, return_convergence_delta=True)
File "/lib/python3.11/site-packages/captum/log/__init__.py", line 42, in wrapper
return func(*args, **kwargs)
File "/lib/python3.11/site-packages/captum/attr/_core/integrated_gradients.py", line 286, in attribute
attributions = self._attribute(
File "/lib/python3.11/site-packages/captum/attr/_core/integrated_gradients.py", line 351, in _attribute
grads = self.gradient_func(
File "/lib/python3.11/site-packages/captum/_utils/gradient.py", line 119, in compute_gradients
grads = torch.autograd.grad(torch.unbind(outputs), inputs)
File "/lib/python3.11/site-packages/torch/autograd/__init__.py", line 303, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/lib/python3.11/site-packages/torch/nn/modules/module.py", line 69, in __call__
return self.hook(module, *args, **kwargs)
File "/lib/python3.11/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook
activations, backprops = self.rearrange_grad_samples(
File "/python3.11/site-packages/opacus/grad_sample/grad_sample_module.py", line 388, in rearrange_grad_samples
activations = module.activations.pop()
IndexError: pop from empty list