Gradient of Loss of neural network with respect to input

(Fswerty) #1

How can we calculate the gradient of a neural network's loss at the output with respect to its input? Specifically, I want to implement the following Keras code in PyTorch:

    v = np.ones([1,10]) #v is input to network
    v_tf = K.variable(v)
    loss = K.sum( K.square(v_tf - keras_network.output)) #keras_network is our model
    grad = K.gradients(loss,[keras_network.input])[0]
    fn = K.function([keras_network.input], [grad])
    keras_network_input = np.ones([1,1,32,32]) #for simplicity ones
    grads = fn([keras_network_input])

and then, using these gradients, I want to update my input in the gradient-descent direction, iteration by iteration, so that the loss is minimized.

(Ruotian(RT) Luo) #2

(Fswerty) #3

Thanks. Can you provide the equivalent PyTorch code for the Keras code above? I am totally new to PyTorch. Thanks.

(Ruotian(RT) Luo) #4

Eh, I don’t know much about Keras…

    # requires_grad=True is needed so autograd can compute gradients w.r.t. the input;
    # .float() converts from NumPy's default float64 to the model's float32
    input = torch.autograd.Variable(torch.from_numpy(np.ones([1,1,32,32])).float(), requires_grad=True)
    output = model(input)
    v = torch.autograd.Variable(torch.from_numpy(np.ones([1,10])).float())
    loss = ((v - output) ** 2).sum()
    grad = torch.autograd.grad(loss, input)[0]  # grad() returns a tuple, take its first element
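For the second half of the question (updating the input by gradient descent), here is a minimal sketch in current PyTorch. The tiny `nn.Sequential` model is just a stand-in, assuming the real network maps a `[1,1,32,32]` input to a `[1,10]` output; the key idea is passing the *input* tensor, not the model parameters, to the optimizer:

```python
import torch
import torch.nn as nn

# Stand-in model (assumption: the real network maps [1,1,32,32] -> [1,10])
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
model.eval()  # freeze dropout/batch-norm behavior

# The input we optimize; requires_grad=True so autograd tracks it
x = torch.ones(1, 1, 32, 32, requires_grad=True)
v = torch.ones(1, 10)  # target, as in the original code

# Optimize the input itself, not the model's weights
optimizer = torch.optim.SGD([x], lr=0.1)

losses = []
for step in range(100):
    optimizer.zero_grad()
    loss = ((v - model(x)) ** 2).sum()
    loss.backward()   # gradient of the loss lands in x.grad
    optimizer.step()  # gradient-descent step on the input
    losses.append(loss.item())
```

After the loop, `losses` should be decreasing, since each step moves `x` against the gradient of the loss.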

(Fswerty) #5

Every time I run this code it gives me different gradients (grads). Shouldn't grad be the same for every run?

(Federico Pala) #6

Maybe your model has dropout or batch norm?