Second derivative with respect to input

Hi, I’m trying to compute the second derivative of a model’s output with respect to its input, but the error that comes back confuses me. Here is some sample code:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(2, 1)

    def forward(self, x):
        x = self.fc1(x)
        return x

dat = torch.tensor([[1.2,3.4], [5.6,1.6]], requires_grad=True)
model = Net()
out = model(dat)
tmp, = torch.autograd.grad(outputs=out, inputs=dat, grad_outputs=torch.ones_like(out), retain_graph=True, create_graph=True)  # first derivative w.r.t. the input
second = torch.autograd.grad(outputs=tmp, inputs=dat, grad_outputs=torch.ones_like(tmp), retain_graph=True, create_graph=True)  # second derivative: this call raises the error

The error message is the following:

--------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-66-8ad432f34bca> in <module>
----> 1 torch.autograd.grad(outputs = tmp, inputs = dat, grad_outputs =torch.ones_like(tmp), retain_graph=True)

~/.local/lib/python3.8/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    221         retain_graph = create_graph
    222 
--> 223     return Variable._execution_engine.run_backward(
    224         outputs, grad_outputs_, retain_graph, create_graph,
    225         inputs, allow_unused, accumulate_grad=False)

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
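
To dig a bit deeper, I also printed tmp, and both rows come out identical (the w1, w2 below are just placeholders; the actual numbers depend on the random initialization of fc1):

print(tmp)
# tensor([[w1, w2],
#         [w1, w2]], grad_fn=...)  # w1, w2: placeholders for fc1's weight values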

I tried setting allow_unused=True as the message suggests, but then the gradient it returns for dat is just None.
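
Concretely, this is roughly what I tried (a sketch, reusing tmp and dat from the code above):

second = torch.autograd.grad(outputs=tmp, inputs=dat, grad_outputs=torch.ones_like(tmp), retain_graph=True, allow_unused=True)
print(second)  # prints (None,): no gradient tensor comes back for dat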

Can anyone help me fix this?
