# How to obtain the derivative of the network output with respect to the input?

I want to compute the derivative of the network output with respect to the input so I can inspect the state of the prediction model. My initial idea was to use hook functions to get the gradient of each layer and then multiply them together via the chain rule, but I don't know exactly how to do it.

Hi,
You could simply use the torch.autograd functionality. There are a bunch of ways to do the same thing.

I recommend reading up on the official documentation to get a hold of the autograd engine. Feel free to post any queries here.
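For the specific case of dy/dx, one way is `torch.autograd.grad`. Here is a minimal sketch with a made-up toy model and shapes (not the poster's actual code):

```python
import torch
import torch.nn as nn

# Toy model and input, just for illustration
model = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 1))
x = torch.randn(5, 3, requires_grad=True)  # mark the input as differentiable

y = model(x)

# grad_outputs weights each output element; ones_like(y) gives the
# gradient of y.sum() w.r.t. x, i.e. dy/dx row by row for a
# one-output-per-sample model.
(dy_dx,) = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))
print(dy_dx.shape)  # same shape as x
```

Note that `x` must have `requires_grad=True` before the forward pass, otherwise autograd has no graph back to the input.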


Thanks, but after I used the autograd function, it turned out it only supports derivatives of scalar outputs. I tried weighting the output tensor, but that doesn't seem to work. Is there another way?

```python
for step in range(200):
    pred_y = model(input_x)
    print('dy/dx: ', dy_dx)  # dy_dx obtained via autograd (computation not shown)
    loss = loss_func(pred_y, input_y)
    optimizer.zero_grad()  # reset accumulated gradients
    loss.backward()
    optimizer.step()
```
```
Traceback (most recent call last):
  File "C:\Users\Slive\PycharmProjects\pythonProject\main.py", line 41, in <module>
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```

Hi, yes, that behaviour is expected when the `loss` tensor (the one you call backward on) isn't a scalar (a tensor containing a single element). You could apply a reduction to it, like:

```python
loss.sum().backward()
```

Or,
as specified in the docs, an additional `gradient` argument needs to be passed to `backward` when it is called on a multidimensional tensor.

`gradient` is a tensor of matching shape, dtype, and device, containing the gradient of the differentiated function w.r.t. itself. Mathematically, it's `dLoss/dLoss`. So the following should work:

```python
loss.backward(torch.ones_like(loss))
```
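The two options accumulate identical gradients; a small sketch with toy tensors (not the poster's model) to illustrate:

```python
import torch

x = torch.randn(4, requires_grad=True)
loss = x ** 2  # non-scalar "loss", shape (4,)

# Option 1: pass grad_outputs of ones explicitly
loss.backward(torch.ones_like(loss))
g1 = x.grad.clone()

# Option 2: reduce to a scalar first, then backward
x.grad = None
(x ** 2).sum().backward()
g2 = x.grad

print(torch.allclose(g1, g2))  # True: both give d(sum x^2)/dx = 2x
```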

Hi @Inz-Zos,

If you want an explicit function for your derivative, you can use the functorch library (which comes packaged with the latest PyTorch install). More info can be found in the docs: Per-sample-gradients — functorch 1.13 documentation

An example,

```python
from functorch import make_functional, vmap, grad, jacrev

model = Model(*args, **kwargs)  # model instance

fnet, params = make_functional(model)  # functorch needs a functionalized model

# Jacobian of the output w.r.t. the input, computed per sample over the batch x:
# jacrev differentiates fnet w.r.t. its second argument (the input),
# and vmap maps that over the batch dimension of x.
jacobian = vmap(jacrev(fnet, argnums=1), in_dims=(None, 0))(params, x)
```

Thank you, I will continue to learn more about your method. 