Jacobian Based Saliency Map implementation

I am trying to implement https://arxiv.org/pdf/1511.07528.pdf, that is, the Jacobian-based saliency map attack (JSMA) for generating adversarial examples. Is there any reference implementation? If not, can someone tell me how to compute the derivative of the logits with respect to the inputs? Thanks in advance.

Hi,

For MNIST data, where the input image is 28x28 and you have 10 classes, you may use this code:

import torch


def find_Jacobian(my_input, my_nn_model):
    # my_input: flattened MNIST image of shape (1, 784); my_nn_model: classifier producing 10 logits
    inp = my_input.detach().clone()
    Jn = torch.zeros((784, 10))  # loop will fill in the Jacobian
    Jn = Jn.float()

    inp.requires_grad_()

    preds = my_nn_model(inp)

    for i in range(10):
        grd = torch.zeros((1, 10))  # same shape as preds
        grd[0, i] = 1  # column of the Jacobian to compute
        preds.backward(gradient=grd, retain_graph=True)
        Jn[:, i] = inp.grad.view(784).float()  # fill in one column of the Jacobian
        inp.grad.zero_()  # .backward() accumulates gradients, so reset to zero

    return Jn
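
As a rough sketch of how this could be used for JSMA, here is one way to call find_Jacobian and build the positive-increase saliency map described in the paper (keep pixels whose gradient increases the target logit while decreasing the other logits). The model, input, and target class below are placeholders, not your actual setup:

import torch
import torch.nn as nn

# toy stand-in classifier taking a flattened 28x28 image (assumption)
my_nn_model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

x = torch.rand(1, 784)                 # stand-in for a flattened MNIST image
Jn = find_Jacobian(x, my_nn_model)     # shape (784, 10)

t = 3                                  # hypothetical target class
alpha = Jn[:, t]                       # d(logit_t) / d(pixel)
beta = Jn.sum(dim=1) - alpha           # summed gradient of all other logits

# zero out pixels that do not help the target class, score the rest
saliency = torch.where((alpha > 0) & (beta < 0),
                       alpha * beta.abs(),
                       torch.zeros_like(alpha))
pixel_to_perturb = saliency.argmax()   # most salient pixel to modify next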

A simple implementation of JSMA in torch 1.8.1: JSMA
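
In torch 1.8.1 you can also obtain the same Jacobian with the built-in torch.autograd.functional.jacobian instead of the manual loop. A minimal sketch, reusing the hypothetical my_nn_model and flattened (1, 784) input from above:

import torch
from torch.autograd.functional import jacobian

def logits_fn(x):
    return my_nn_model(x)   # maps input to the 10 logits

x = torch.rand(1, 784)
J = jacobian(logits_fn, x)  # shape (1, 10, 1, 784)
Jn = J.squeeze().t()        # rearrange to (784, 10) to match find_Jacobian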