Derivative of selected hypothesis of neural network

Hello guys! I hope you are all well.

I have a problem which I am trying (so far unsuccessfully) to solve :)… (It's for my master's thesis.)
Anyway, I coded a fully connected feed-forward neural network for supervised regression with PyTorch (4 layers, 32 neurons per layer). It is one-dimensional (1 input dim., 1 output dim.), i.e. the selected hypothesis of the NN can be plotted in an x-y plane. So now imagine I plot the hypothesis in x-y coordinates: I take a created data set X, apply Function = model(X), then plot(Function). I would now like to get the slope of that Function. I know that in PyTorch it is a tensor containing data points with the same size as X. Nevertheless, is there any way to get the derivative/slope of the Function? I don't want the gradient w.r.t. the weights, I need the slope of y w.r.t. x!!

Please help!! :)

If you do x.requires_grad_() you can compute the derivative of y w.r.t. x using torch.autograd.grad. The derivative of x w.r.t. y is then the reciprocal, 1/(dy/dx), by the inverse function theorem.
Of course, for 1d, approximation by finite differences also works well.
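To illustrate the idea, here is a minimal sketch of computing dy/dx with torch.autograd.grad. A simple elementwise function stands in for the trained model, so the exact derivative is known and checkable:

```python
import torch

# Hypothetical stand-in for the trained model: y = x^2, so dy/dx = 2x.
model = lambda x: x ** 2

# Mark the input as requiring grad BEFORE the forward pass.
x = torch.linspace(-1.0, 1.0, 5, requires_grad=True)
y = model(x)

# grad_outputs=ones sums the per-sample gradients; for a 1-d elementwise
# map this yields exactly dy_i/dx_i at each point of x.
(dydx,) = torch.autograd.grad(outputs=y, inputs=x,
                              grad_outputs=torch.ones_like(y))
print(dydx)  # matches 2*x
```

The same pattern applies unchanged when model is a trained nn.Module instead of a lambda.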

Best regards


Hi Tom,

thanks for your response. I would like to try autograd, but I don't know where to put it. Currently I place it after training my NN, like this:

with torch.no_grad():
    Predict = model(XTest)
    Function = model(X)

XGrad = Variable(X, requires_grad=True)
FGrad = Variable(Function, requires_grad=True)
Grad = torch.autograd.grad(outputs=FGrad, inputs=X,
                           grad_outputs=torch.ones_like(FGrad), allow_unused=True)

But, if I print Grad, it is basically (None,).

Could you please be a little more precise about how to implement it? :)
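For reference, a sketch of what the corrected version could look like (the model here is a hypothetical stand-in for the trained network). The key changes: set requires_grad on the input tensor before the forward pass, do not run that forward pass under torch.no_grad() (it disables graph building), do not re-wrap the output in a new Variable (that detaches it from the graph), and pass the same tensor to inputs= that actually went through the model:

```python
import torch

# Hypothetical stand-in for the trained 4-layer, 32-neuron network.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

X = torch.linspace(0.0, 1.0, 50).unsqueeze(1)  # shape (50, 1)

XGrad = X.clone().requires_grad_(True)  # mark input BEFORE the forward pass
Function = model(XGrad)                 # NOT inside torch.no_grad()

# dFunction/dX at every sample; returns a tensor, not (None,),
# because Function was computed from XGrad with the graph intact.
(Grad,) = torch.autograd.grad(outputs=Function, inputs=XGrad,
                              grad_outputs=torch.ones_like(Function))
```

Grad then has the same shape as X and can be plotted against X to visualize the slope of the learned hypothesis.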