How can we calculate the gradient of a neural network's loss, computed at the output, with respect to its input? Specifically, I want to implement the following Keras code in PyTorch:

```
import numpy as np
from keras import backend as K

v = np.ones([1, 10])  # v is the target for the network output
v_tf = K.variable(v)
loss = K.sum(K.square(v_tf - keras_network.output))  # keras_network is our model
grad = K.gradients(loss, [keras_network.input])[0]
fn = K.function([keras_network.input], [grad])
keras_network_input = np.ones([1, 1, 32, 32])  # all-ones input, for simplicity
grads = fn([keras_network_input])
```

Then, using these gradients, I want to update the input via gradient descent, iteration by iteration, so that the loss is minimized.
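A minimal PyTorch sketch of what I have in mind (the `network` below is a hypothetical stand-in for `keras_network`; the loop optimizes the input tensor `x` rather than the weights):

```
import torch

# Hypothetical stand-in for keras_network: maps a (1, 1, 32, 32) input
# to a (1, 10) output. Substitute the real model here.
network = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 32, 10),
)

v = torch.ones(1, 10)                              # target for the network output
x = torch.ones(1, 1, 32, 32, requires_grad=True)   # input we want to optimize

# Gradient descent on the input only; the network weights are untouched.
optimizer = torch.optim.SGD([x], lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    loss = torch.sum((v - network(x)) ** 2)  # same loss as the Keras snippet
    loss.backward()                          # fills x.grad with d(loss)/dx
    optimizer.step()                         # x <- x - lr * x.grad
```

After `loss.backward()`, `x.grad` plays the role of the `grads` array returned by `K.function` in the Keras version, and `optimizer.step()` performs the descent update on the input.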