Problem calculating higher-order gradients on an nn.Module

I can successfully calculate higher-order gradients for a simple formula, but the same approach fails on an nn.Module.

import torch
from torch import autograd

x = autograd.Variable(torch.randn(2, 2), requires_grad=True)

y = x ** 2
# first-order gradient; create_graph=True keeps the graph so it can be
# differentiated a second time
x_grad = autograd.grad(outputs=y, inputs=x,
                       grad_outputs=torch.ones(y.size()),
                       create_graph=True, only_inputs=True)[0]
z = x_grad ** 2
# second-order gradient
autograd.grad(outputs=z, inputs=[x],
              grad_outputs=torch.ones(z.size()),
              only_inputs=False)

The above code runs correctly.
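
As a sanity check, the result matches the analytic second derivative: with grad_outputs of all ones, x_grad = 2 * x, so z = 4 * x ** 2 and the gradient of z w.r.t. x is 8 * x. A minimal sketch of that check (the xx_grad name is mine):

import torch
from torch import autograd

x = autograd.Variable(torch.randn(2, 2), requires_grad=True)
y = x ** 2
x_grad = autograd.grad(outputs=y, inputs=x,
                       grad_outputs=torch.ones(y.size()),
                       create_graph=True)[0]   # x_grad = 2 * x
z = x_grad ** 2                                # z = 4 * x ** 2
xx_grad = autograd.grad(outputs=z, inputs=[x],
                        grad_outputs=torch.ones(z.size()))[0]
print((xx_grad - 8 * x).abs().max())           # ~0, i.e. dz/dx = 8 * x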

from torch import nn

net = nn.Linear(2, 2)

x = autograd.Variable(torch.randn(2, 2), requires_grad=True)

y = net(x)
# first-order gradient through the module
x_grad = autograd.grad(outputs=y, inputs=x,
                       grad_outputs=torch.ones(y.size()),
                       create_graph=True, only_inputs=True)[0]
z = x_grad ** 2
# second-order gradient -- this call raises the error below
autograd.grad(outputs=z, inputs=[x],
              grad_outputs=torch.ones(z.size()),
              only_inputs=False)

This raises the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-191-3b4da0254135> in <module>()
     10 autograd.grad(outputs=z, inputs=[x],
     11               grad_outputs=torch.ones(z.size()),
---> 12               only_inputs=False)

/home/users/gang.cao/env/lib/python2.7/site-packages/torch/autograd/__init__.pyc in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs)
    144     return Variable._execution_engine.run_backward(
    145         outputs, grad_outputs, retain_graph,
--> 146         inputs, only_inputs)
    147 
    148 

RuntimeError: there are no graph nodes that require computing gradients
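
My current guess, which I have not confirmed: for a plain nn.Linear layer y = x.mm(W.t()) + b, the gradient of y w.r.t. x is grad_outputs.mm(W), which does not depend on x at all. z = x_grad ** 2 is then constant with respect to x, so the second autograd.grad call finds no graph nodes to differentiate. If that is right, inserting a nonlinearity (tanh here, an arbitrary choice) should make x_grad depend on x again. A hedged sketch, assuming a PyTorch version where tanh supports double backward:

import torch
from torch import autograd, nn

net = nn.Linear(2, 2)
x = autograd.Variable(torch.randn(2, 2), requires_grad=True)

# with a nonlinearity, dy/dx depends on x, so a second derivative exists
y = torch.tanh(net(x))
x_grad = autograd.grad(outputs=y, inputs=x,
                       grad_outputs=torch.ones(y.size()),
                       create_graph=True)[0]
z = x_grad ** 2
xx_grad = autograd.grad(outputs=z, inputs=[x],
                        grad_outputs=torch.ones(z.size()))[0]
print(xx_grad)

Is the constant-Jacobian explanation correct, or is double backward through nn.Linear simply unsupported in this version?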