"outputs" in python_engine.cpp

I have some simple code from the tutorials, and I am trying to understand what happens when loss.backward() is called, using GDB to step through it. My goal is to find where the grads of tensors change in the C++ code. However:

Line 169 of python_engine.cpp says:

outputs = engine.execute(roots, grads, keep_graph, create_graph, output_edges);

but printing outputs in GDB always returns this:

$8 = {
  <std::_Vector_base<torch::autograd::Variable, std::allocator<torch::autograd::Variable> >> = {
    _M_impl = {
      <std::allocator<torch::autograd::Variable>> = {
        <__gnu_cxx::new_allocator<torch::autograd::Variable>> = {<No data fields>}, <No data fields>}, 
      members of std::_Vector_base<torch::autograd::Variable, std::allocator<torch::autograd::Variable> >::_Vector_impl: 
      _M_start = 0x0, 
      _M_finish = 0x0, 
      _M_end_of_storage = 0x0
    }
  }, <No data fields>}

Any ideas why?

outputs is empty for loss.backward() because that call doesn’t return the gradients. The gradients are accumulated into the .grad attribute of the leaf variables (look in accumulate_grad.cpp instead).
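
A minimal Python sketch of that behaviour (the tensor values are just illustrative): backward() returns None, and the gradients only show up on the leaves’ .grad:

import torch

x = torch.randn(3, requires_grad=True)  # leaf variable
loss = (x * 2).sum()

result = loss.backward()  # runs the engine; the engine's `outputs` stay empty
print(result)             # None
print(x.grad)             # tensor([2., 2., 2.]) -- accumulated into .grad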

outputs will contain Tensors if you use torch.autograd.grad(loss, <inputs>).
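
For contrast, a sketch of the torch.autograd.grad path, where the gradients do come back as return values instead of being accumulated:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x * 2).sum()

grads = torch.autograd.grad(loss, [x])  # engine outputs returned directly
print(grads)   # (tensor([2., 2., 2.]),)
print(x.grad)  # None -- nothing was written into .grad on this path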

Awesome, thanks.

Any chance we can also see where the derivatives of the interior nodes (those with a non-null grad_fn()) are computed, i.e. the ones used to calculate the gradients that get accumulated in the leaves? Is that inside libtorch.so?

They are defined in the generated file torch/csrc/autograd/generated/Functions.cpp. That file is generated from tools/autograd/derivatives.yaml using the Python files in the tools/autograd directory.
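
A quick way to see which generated function a given interior node uses, from Python (shapes and values here are arbitrary):

import torch

x = torch.ones(2, requires_grad=True)  # leaf: no grad_fn
y = x * 3                              # interior node
z = y.sum()

print(x.grad_fn)  # None
print(y.grad_fn)  # e.g. <MulBackward0 ...> -- generated in Functions.cpp
print(z.grad_fn)  # e.g. <SumBackward0 ...>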