Directional derivative

Is it possible to compute directional derivatives using PyTorch?

Here are some examples of the math: http://tutorial.math.lamar.edu/Classes/CalcIII/DirectionalDeriv.aspx

If I remember correctly, directional derivatives are Jacobian-vector products,
so you can compute them with the trick shown in this gist.
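
The core of the trick is a double backward: take a vector-Jacobian product against a dummy "projection" tensor, then differentiate that result with respect to the projection, contracted with your direction vector. Here is a minimal sketch of the idea (the jvp helper name, the zero-filled dummy tensor and the toy function are my own choices for illustration, not the exact code from the gist):

import torch

def jvp(f, x, v):
    # Directional derivative of f at x in the direction v, via the double-backward trick:
    # 1) take a vector-Jacobian product against a dummy "projection" tensor,
    # 2) differentiate that result with respect to the projection, contracted with v.
    x = x.detach().requires_grad_(True)
    y = f(x)
    projection = torch.zeros_like(y, requires_grad=True)
    # the VJP is linear in `projection`; create_graph=True keeps it differentiable
    vjp, = torch.autograd.grad(y, x, projection, create_graph=True)
    # differentiating the VJP w.r.t. the projection recovers J @ v
    Jv, = torch.autograd.grad(vjp, projection, v)
    return Jv

# toy check: f(x) = x**2 elementwise, so the JVP with v = (1, 1, 1) is 2 * x
x = torch.tensor([1., 2., 3.])
v = torch.ones(3)
print(jvp(lambda t: t ** 2, x, v))  # tensor([2., 4., 6.])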


@albanD In order to use this gist I seem to have to set create_graph=True. Is this correct? (Else I don’t see how the intermediate gradients can come out with requires_grad=True.) Thanks.

Yes, if you need to backprop through this step, you will need to add create_graph=True in this gist.

@albanD thanks. Even without backpropagating through the output, this wouldn’t work at all for me unless I set that parameter. Is this to be expected?

I got an error saying the thing to be differentiated didn’t have requires_grad=True, and when I checked the output of the first call to autograd.grad, that was indeed the case.

For clarity: I took the gist, put in my own parameters, and it didn’t work. I checked the intermediate value and it had requires_grad=False.

What is the function you tried to use it with? Can you share a code sample?
What most likely happens is that your function is not differentiable enough.

Here is a minimal example that raises an error for me. My PyTorch version is 1.3.1.

import torch

a = torch.tensor([1., 2., 3.], requires_grad=True)
b = torch.tensor([5., 5., 5.], requires_grad=True)

c = torch.dot(a, b)

projection = torch.ones_like(c, requires_grad=True)
intermediate = torch.autograd.grad(
    c, 
    a, 
    projection
)

d = torch.tensor([2., 2., 2.], requires_grad=True)

d_dot_b = torch.autograd.grad(
    intermediate, 
    projection,
    d
)

The error I get is

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-41-81219a2d5296> in <module>
     16     intermediate,
     17     projection,
---> 18     d
     19 )

~/miniconda3/envs/lottery-tickets/lib/python3.6/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    155     return Variable._execution_engine.run_backward(
    156         outputs, grad_outputs, retain_graph, create_graph,
--> 157         inputs, allow_unused)
    158 
    159 

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

On the other hand, the following runs fine for me and gives the expected output (d_dot_b comes out as tensor(30.), i.e. d · b).

import torch

a = torch.tensor([1., 2., 3.], requires_grad=True)
b = torch.tensor([5., 5., 5.], requires_grad=True)

c = torch.dot(a, b)

projection = torch.ones_like(c, requires_grad=True)
intermediate = torch.autograd.grad(
    c, 
    a, 
    projection,
    create_graph=True  # <<<<< the only change
)

d = torch.tensor([2., 2., 2.], requires_grad=True)

d_dot_b = torch.autograd.grad(
    intermediate, 
    projection,
    d
)

This is almost certainly due to me misunderstanding something or lacking some knowledge. Thanks for your help and attention in clearing it up!

Hi,

Right, the gist was made for an older version of PyTorch where the create_graph flag behaved slightly differently. In particular, it was not required to set it if projection required gradients.
So yes, you need to set it in the first backward to be allowed to call grad on the result.
You only need to set it in the second backward if you want to be able to call grad on d_dot_b.
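
For example, continuing your snippet above (just a sketch of what I mean, reusing your variables):

d_dot_b, = torch.autograd.grad(
    intermediate,
    projection,
    d,
    create_graph=True,  # only needed if you want to differentiate d_dot_b further
)
# d_dot_b now has a grad_fn, so you can keep going, e.g.
d_dot_b.backward()
print(b.grad)  # tensor([2., 2., 2.]), i.e. the gradient of d . b with respect to b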

Great, thanks so much for clearing that up. All the best.