Slope of the loss function

To get the slope of the loss function, one has to compute its derivative. I've tested the following code without success; it fails with:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time
for i in range(training_iter + 1):
    optimizer.zero_grad()
    x_ = train_x.detach().requires_grad_(True)  # detached copy of the training inputs
    output = model(train_x)
    loss = -mll(output, train_y)
    dy_dx = torch.autograd.grad(loss.mean(), x_, allow_unused=True)
    loss.backward()  # <- the RuntimeError above is raised here

Using loss.backward(retain_graph=True) does not resolve the issue either.
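
The problem can be reproduced even without the GP model. In this toy sketch (the names are purely illustrative), the first autograd.grad call frees the graph, so the later backward() hits the same error:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()          # toy stand-in for -mll(output, train_y)
torch.autograd.grad(loss, x)   # this first pass frees the graph's intermediate buffers
loss.backward()                # raises the same "backward through the graph a second time" error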

If you want to access the derivative dl/dx (the Jacobian), you can read x_.grad after loss.backward(), as this attribute is populated once you set requires_grad=True and backpropagate. I did not test your code with the detach() on x_ and would simply write x_ = train_x.requires_grad_(True), since x_ needs to be part of the graph anyway.
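
As a minimal sketch of that pattern (toy tensors, not the GP model from the question): once requires_grad is set on the input and backward() has run, the .grad attribute holds the slope of the loss with respect to that input.

import torch

x_ = torch.randn(5, requires_grad=True)  # input we want the slope with respect to
loss = (x_ ** 3).sum()                   # toy stand-in for the GP loss
loss.backward()
print(x_.grad)                           # dloss/dx_, here equal to 3 * x_**2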

If you call autograd.grad or backward multiple times on the same graph, you have to specify retain_graph=True in the first call.
In your sample above, that means changing the autograd.grad call to:
dy_dx = torch.autograd.grad(loss.mean(), x_, allow_unused=True, retain_graph=True)
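
In isolation (again a toy example rather than the GP code), the rule looks like this: the first call keeps the intermediate buffers alive via retain_graph=True, so a second pass over the same graph succeeds.

import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()                                      # toy scalar loss
grad_x, = torch.autograd.grad(loss, x, retain_graph=True)  # first pass keeps the graph
loss.backward()                                            # second pass now succeeds
print(torch.allclose(grad_x, x.grad))                      # both are 2 * x -> True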

Sorry for the late reply; my Gmail missed the PyTorch notifications.
Thanks for your help.
The following worked:
for i in range(training_iter + 1):
    optimizer.zero_grad()
    train_x = train_x.requires_grad_(True)  # in-place, so the reassignment is optional
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward(retain_graph=True)        # keep the graph so autograd.grad can reuse it
    dy_dx = torch.autograd.grad(loss.mean(), train_x, allow_unused=True, retain_graph=True)
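
For completeness, here is a sketch of how that loop can fit into a full run. The thread never shows the model definition, so everything outside the loop is an assumption: a standard GPyTorch ExactGP regression setup as in the library tutorials, with toy data.

import torch
import gpytorch

# Assumed setup (not shown in the thread): toy data and the standard
# ExactGP regression model from the GPyTorch tutorials.
train_x = torch.linspace(0, 1, 50)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(train_x.size(0))

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(self.mean_module(x), self.covar_module(x))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

model.train()
likelihood.train()
training_iter = 50

train_x.requires_grad_(True)  # in-place, so the graph built from train_x tracks it
for i in range(training_iter + 1):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward(retain_graph=True)  # parameter gradients (also fills train_x.grad)
    dy_dx = torch.autograd.grad(loss.mean(), train_x,
                                allow_unused=True, retain_graph=True)[0]
    optimizer.step()  # dy_dx now holds the slope of the loss w.r.t. train_x

Note that torch.autograd.grad returns a tuple, which is why the sketch indexes [0]; in the loop from the previous post, the slope tensor is likewise dy_dx[0].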