Does Captum not work with Conv1d networks?

I built a Conv1d neural network to classify 1D data series; the input shape is (N, 1, 5000) and the test accuracy is ~98%. I tried to use Captum to see which portions of the data are most informative, but I cannot make it work. Does Captum not support Conv1d networks?

I followed the tutorial; here is my code:

from captum.attr import IntegratedGradients

X_test.requires_grad_()  # X_test has shape (N, 1, 5000)
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(X_test, target=0, return_convergence_delta=True)
print('IG Attributions:', attributions)
print('Convergence Delta:', delta)

The error message:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-88-18f4428b41d3> in <module>
      1 X_test.requires_grad_()
      2 ig = IntegratedGradients(model)
----> 3 attributions, delta = ig.attribute(X_test,  target=0, return_convergence_delta=True)
      4 print('IG Attributions:', attributions)
      5 print('Convergence Delta:', delta)

~/anaconda3/lib/python3.7/site-packages/captum/attr/_core/integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, internal_batch_size, return_convergence_delta)
    282             internal_batch_size=internal_batch_size,
    283             forward_fn=self.forward_func,
--> 284             target_ind=expanded_target,
    285         )
    286 

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/batching.py in _batched_operator(operator, inputs, additional_forward_args, target_ind, internal_batch_size, **kwargs)
    162         )
    163         for input, additional, target in _batched_generator(
--> 164             inputs, additional_forward_args, target_ind, internal_batch_size
    165         )
    166     ]

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/batching.py in <listcomp>(.0)
    161             **kwargs
    162         )
--> 163         for input, additional, target in _batched_generator(
    164             inputs, additional_forward_args, target_ind, internal_batch_size
    165         )

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/gradient.py in compute_gradients(forward_fn, inputs, target_ind, additional_forward_args)
     94     with torch.autograd.set_grad_enabled(True):
     95         # runs forward pass
---> 96         outputs = _run_forward(forward_fn, inputs, target_ind, additional_forward_args)
     97         assert outputs[0].numel() == 1, (
     98             "Target not provided when necessary, cannot"

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/common.py in _run_forward(forward_func, inputs, target, additional_forward_args)
    501         *(*inputs, *additional_forward_args)
    502         if additional_forward_args is not None
--> 503         else inputs
    504     )
    505     return _select_targets(output, target)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    540     def __call__(self, *input, **kwargs):
    541         for hook in self._forward_pre_hooks.values():
--> 542             result = hook(self, input)
    543             if result is not None:
    544                 if not isinstance(result, tuple):

~/anaconda3/lib/python3.7/site-packages/captum/attr/_core/deep_lift.py in pre_hook(module, baseline_inputs_add_args)
    503         def pre_hook(module: Module, baseline_inputs_add_args: Tuple) -> Tuple:
    504             inputs = baseline_inputs_add_args[0]
--> 505             baselines = baseline_inputs_add_args[1]
    506             additional_args = None
    507             if len(baseline_inputs_add_args) > 2:

IndexError: tuple index out of range

The issue has been resolved in the GitHub issue thread (adding the link here for future reference). Note that the traceback ends inside captum/attr/_core/deep_lift.py's pre_hook even though IntegratedGradients was called, which suggests a stale DeepLift forward pre-hook was still registered on the model (e.g. from an earlier, interrupted DeepLift run) rather than any Conv1d limitation.
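
For future readers, here is a minimal, self-contained sketch along those lines. It is not the code from the thread: the model, shapes, and the hook-clearing workaround are all assumptions (clearing _forward_pre_hooks touches a private PyTorch attribute; rebuilding the model and reloading its weights is the safer route). It also shows that IntegratedGradients itself runs fine on a Conv1d network.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical stand-in for the original Conv1d classifier.
class SmallConv1dNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=4),  # (N, 1, 5000) -> (N, 8, 1249)
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                    # -> (N, 8, 1)
        )
        self.classifier = nn.Linear(8, 2)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

model = SmallConv1dNet().eval()
X_test = torch.randn(4, 1, 5000, requires_grad=True)  # same shape as in the post

# Assumed fix: if an earlier DeepLift run crashed partway, its forward
# pre-hooks may still be attached and will fire during IG's forward passes,
# producing the IndexError above. Clearing them (private API) or rebuilding
# the model removes the stale hooks.
for module in model.modules():
    module._forward_pre_hooks.clear()

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(X_test, target=0, return_convergence_delta=True)
print(attributions.shape)  # torch.Size([4, 1, 5000]), one attribution per input point
print(delta)

With the hooks gone, the attributions have the same (N, 1, 5000) shape as the input, so each value indicates how much the corresponding point of the series contributed to the target class.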