Hi, I’m trying to implement the following paper in PyTorch.
https://arxiv.org/abs/1608.00507
It’s one way to construct a saliency map.
For a given Conv2d layer, we need to truncate the weight W to W+, in which only the positive weights are kept. Then a backward convolution (Conv2d_back) is performed with this new weight (together with the normalized output). However, in PyTorch, the operator torch._C._functions.Conv2d is no longer usable. Is there any suggestion on how to work around this and implement the paper?
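For reference, the W-to-W+ truncation on its own is straightforward with `clamp` and the functional conv API in current PyTorch. A minimal sketch (the layer sizes here are made up):

```python
import torch
import torch.nn.functional as F

# Hypothetical layer: truncate its weight to W+ (positive part only),
# then run the convolution with the truncated weight.
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
w_plus = conv.weight.data.clamp(min=0)   # W+ : negative weights zeroed

x = torch.rand(1, 3, 16, 16)             # non-negative input activations
out = F.conv2d(x, w_plus, bias=None, padding=1)
# with non-negative inputs and non-negative weights, out is non-negative
```

This only covers the forward step with W+; the paper's backward pass additionally needs the mismatched forward/backward handling discussed below.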
Thanks a lot, and happy new year!
chenyuntc (Yun Chen), January 2, 2018, 1:06pm
This seems like guided backpropagation:
"""
import torch
from torch.nn import ReLU
from misc_functions import (get_params,
convert_to_grayscale,
save_gradient_images,
get_positive_negative_saliency)
class GuidedBackprop():
"""
Produces gradients generated with guided back propagation from the given image
"""
def __init__(self, model, processed_im, target_class):
self.model = model
self.input_image = processed_im
self.target_class = target_class
self.gradients = None
# Put model in evaluation mode
self.model.eval()
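The snippet above only shows the constructor. The core of guided backpropagation is a backward hook on each ReLU that zeroes out negative gradients. A minimal self-contained sketch of that idea (the model and hook here are illustrative, not from the snippet's repo; on PyTorch older than 1.8 you would use register_backward_hook instead):

```python
import torch
from torch import nn

def guided_relu_hook(module, grad_input, grad_output):
    # Guided backprop: let only positive gradients flow back through ReLUs.
    return (grad_input[0].clamp(min=0),)

# Toy model; any model with ReLU modules works the same way.
model = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU())
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_full_backward_hook(guided_relu_hook)

x = torch.rand(1, 3, 8, 8, requires_grad=True)
model(x).sum().backward()
grad = x.grad  # guided gradients w.r.t. the input image
```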
To handle the inconsistent forward and backward functions in excitation backprop, I implemented the two passes with the raw backend conv functions:

from torch._thnn import type2backend

backend = type2backend[inp.type()]
f = getattr(backend, 'SpatialConvolutionMM_updateOutput')
f(backend.library_state, inp, output, weight, bias, columns, ones,
  kH, kW, ctx.stride[0], ctx.stride[1], ctx.padding[0], ctx.padding[1])

Note that this is a temporary solution for PyTorch 0.2. The full implementation can be found at https://github.com/yulongwang12/visual-attribution . Hope it helps!
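In later PyTorch versions the private THNN backend is gone, but the same mismatched forward/backward can be written with torch.autograd.Function plus the public torch.nn.grad helpers, with no private APIs. A hedged sketch (class name, padding, and shapes are my own choices, not from the repo above):

```python
import torch
import torch.nn.functional as F

class PosWeightConv(torch.autograd.Function):
    """Sketch: forward with the full weight W, backward as if W+ was used."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return F.conv2d(x, weight, padding=1)

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        w_plus = weight.clamp(min=0)  # W+ : positive part of the weight
        # Propagate the input gradient through W+ instead of W.
        grad_x = torch.nn.grad.conv2d_input(x.shape, w_plus, grad_out, padding=1)
        # Weight gradient computed normally (with the original shapes).
        grad_w = torch.nn.grad.conv2d_weight(x, weight.shape, grad_out, padding=1)
        return grad_x, grad_w

# Usage: PosWeightConv.apply(x, weight) behaves like F.conv2d forward,
# but backpropagates the excitation-style positive-weight gradient.
```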