Gradient of Loss of neural network with respect to input

How can we calculate the gradient of a neural network's loss at the output with respect to its input? Specifically, I want to implement the following Keras code in PyTorch:

    import numpy as np
    from keras import backend as K

    v = np.ones([1, 10])  # v is the target for the network's output
    v_tf = K.variable(v)
    loss = K.sum(K.square(v_tf - keras_network.output))  # keras_network is our model
    grad = K.gradients(loss, [keras_network.input])[0]
    fn = K.function([keras_network.input], [grad])
    keras_network_input = np.ones([1, 1, 32, 32])  # all ones, for simplicity
    grads = fn([keras_network_input])

and then, using these gradients, I want to update my input iteration by iteration (gradient descent) so that the loss is minimized.


http://pytorch.org/docs/master/autograd.html?highlight=grad#torch.autograd.grad
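
For reference, a minimal usage sketch of `torch.autograd.grad` (the variable names here are just illustrative):

    import torch

    x = torch.autograd.Variable(torch.ones(3), requires_grad=True)
    y = (x ** 2).sum()
    g, = torch.autograd.grad(y, x)  # g holds dy/dx, which is 2 * x here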


Thanks. Can you provide the PyTorch equivalent of the Keras code above? I am totally new to PyTorch.

Eh, I don’t know much about Keras…

    import numpy as np
    import torch

    # the input must have requires_grad=True so autograd can differentiate w.r.t. it
    input = torch.autograd.Variable(torch.from_numpy(np.ones([1, 1, 32, 32])).float(), requires_grad=True)
    output = model(input)  # model is your network
    v = torch.autograd.Variable(torch.from_numpy(np.ones([1, 10])).float())
    loss = ((v - output) ** 2).sum()
    grad, = torch.autograd.grad(loss, input)  # returns a tuple with one entry per input
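
To then minimize the loss with respect to the input, you can repeat that gradient computation in a loop and step the input against the gradient. A minimal sketch, assuming a fixed step size of 0.01 (illustrative, not tuned):

    lr = 0.01  # illustrative step size
    for _ in range(100):
        output = model(input)
        loss = ((v - output) ** 2).sum()
        grad, = torch.autograd.grad(loss, input)
        # step against the gradient and rewrap so the next iteration can differentiate again
        input = torch.autograd.Variable(input.data - lr * grad.data, requires_grad=True)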

Every time I run this code it gives me different gradients (grads). Shouldn't grad be the same on every run?

Maybe your model has dropout or batch norm?
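
If so, switching the model to evaluation mode before computing the gradients should make them deterministic; a minimal sketch:

    model.eval()  # dropout disabled; batch norm uses its running statistics
    output = model(input)
    loss = ((v - output) ** 2).sum()
    grad, = torch.autograd.grad(loss, input)  # now identical across runs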