Implementing Excitation Backprop

I’m implementing excitation backprop (MWP and c-MWP). I’ve read the paper and understand it well, but I can’t figure out an efficient way to write the backward (top-down) pass for pre-trained models like VGG, Inception, and ResNet.
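For context, the core of the top-down pass is the marginal winning probability (MWP) update: each top neuron redistributes its probability to bottom neurons in proportion to their excitatory (positive-weight) contribution. A minimal NumPy sketch of that rule for a single fully connected layer (the function name and `eps` guard are mine, not from the paper):

```python
import numpy as np

def eb_linear(p_top, a_bottom, weight, eps=1e-12):
    """One excitation-backprop (MWP) step through a fully connected layer.

    p_top:    (batch, out) winning probabilities at the top layer
    a_bottom: (batch, in)  non-negative activations feeding the layer
    weight:   (out, in)    layer weights; only W+ = max(W, 0) is used
    """
    w_plus = np.maximum(weight, 0.0)
    z = a_bottom @ w_plus.T + eps   # total excitatory input to each top neuron
    s = p_top / z                   # each top neuron's probability per unit input
    return a_bottom * (s @ w_plus)  # MWP of the bottom layer
```

Since every top neuron passes its full probability mass down through its positive weights, the returned probabilities sum (per sample) to the same total as `p_top`, which is a handy sanity check for an implementation.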

One way is to store all activations and weights during the forward pass and then do the backward (top-down) calculations outside the model. That would work, but there must be a better way than manually storing everything: the weights and activations are already available inside the model, so I don’t see the point in duplicating them outside it.
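One common way to avoid manual bookkeeping is to let the model record its own inputs via forward hooks, so the activations are captured in place as the forward pass runs. A minimal sketch, using a small `Sequential` stand-in instead of an actual pre-trained VGG (the helper names here are mine):

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained network (swap in torchvision's vgg16, etc.)
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2)).eval()

activations = {}  # layer name -> input activation captured on the forward pass

def save_input(name):
    def hook(module, inputs, output):
        activations[name] = inputs[0].detach()
    return hook

handles = [
    m.register_forward_hook(save_input(name))
    for name, m in model.named_modules()
    if isinstance(m, (nn.Conv2d, nn.Linear))
]

with torch.no_grad():
    model(torch.randn(1, 8))

# activations now holds the bottom activation of every Conv2d/Linear layer,
# ready for the top-down pass; call h.remove() on each handle when done.
```

The same idea works on VGG/Inception/ResNet since `named_modules()` walks any module tree, and backward hooks can similarly be used to replace the gradient flow with the MWP computation instead of running it outside the model.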

Any help would be appreciated, thanks!

I implemented excitation backprop and other visual saliency methods (though they are based on PyTorch 0.2 :-/). Hope it helps!