Hi
I am trying to look at the gradients generated at the non-leaf/non-root nodes, using an example from the tutorials. The `grad_fn` for one of the nodes in the created graph is `MmBackward`. I searched `Functions.cpp` in the generated folder and found this function:
```cpp
variable_list MmBackward::apply(variable_list&& grads) {
  IndexRangeGenerator gen;
  auto self_ix = gen.range(1);
  auto mat2_ix = gen.range(1);
  variable_list grad_inputs(gen.size());
  auto& grad = grads[0];
  auto self = self_.unpack();
  auto mat2 = mat2_.unpack();
  if (should_compute_output({ mat2_ix })) {
    auto grad_result = mm_mat2_backward(grad, self, mat2_sizes, mat2.strides(), 1);
    copy_range(grad_inputs, mat2_ix, grad_result);
  }
  if (should_compute_output({ self_ix })) {
    auto grad_result = mm_mat1_backward(grad, mat2, self, 1);
    copy_range(grad_inputs, self_ix, grad_result);
  }
  return grad_inputs;
}
```
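For context on what `mm_mat1_backward` and `mm_mat2_backward` compute: for `C = A.mm(B)`, the standard matrix-multiply gradients are `dA = dC.mm(B.t())` and `dB = A.t().mm(dC)`. A minimal Python sketch (assuming a working PyTorch install) that checks these formulas against autograd:

```python
import torch

A = torch.randn(3, 4, requires_grad=True)
B = torch.randn(4, 5, requires_grad=True)
C = A.mm(B)

# Backpropagate a gradient of ones through the matmul.
grad_C = torch.ones_like(C)
C.backward(grad_C)

# Manual backward formulas corresponding to the two branches above:
manual_dA = grad_C.mm(B.t())   # gradient w.r.t. the first operand (self)
manual_dB = A.t().mm(grad_C)   # gradient w.r.t. the second operand (mat2)

assert torch.allclose(A.grad, manual_dA)
assert torch.allclose(B.grad, manual_dB)
```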
Using gdb, I tried to look at `grad_inputs`, and I would like to ask two questions:

- Why is it of size 2?
- Why does it seem empty?
```
(gdb) p grad_inputs
$2 = {
  <std::_Vector_base<torch::autograd::Variable, std::allocator<torch::autograd::Variable> >> = {
    _M_impl = {
      <std::allocator<torch::autograd::Variable>> = {
        <__gnu_cxx::new_allocator<torch::autograd::Variable>> = {<No data fields>}, <No data fields>},
      members of std::_Vector_base<torch::autograd::Variable, std::allocator<torch::autograd::Variable> >::_Vector_impl:
      _M_start = 0x7fdf8c000b10,
      _M_finish = 0x7fdf8c000b20,
      _M_end_of_storage = 0x7fdf8c000b20
    }
  }, <No data fields>}
(gdb) p *grad_inputs._M_impl._M_start
$8 = {
  <at::Tensor> = {
    impl_ = {
      target_ = 0x7fdfa91de340 <c10::UndefinedTensorImpl::_singleton>
    }
  }, <No data fields>}
```
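My current understanding, for what it's worth: the two `gen.range(1)` calls reserve one slot per forward input (`self` and `mat2`), which would explain the size of 2, and the `UndefinedTensorImpl::_singleton` in the dump suggests the slots start out as undefined Variables until the `should_compute_output` branches fill them. The two-input structure is also visible from Python; a small sketch (variable names are my own):

```python
import torch

x = torch.randn(2, 3, requires_grad=True)
w = torch.randn(3, 4, requires_grad=True)
y = x.mm(w)

# The backward node for mm (named MmBackward or MmBackward0
# depending on the PyTorch version):
print(type(y.grad_fn).__name__)

# One outgoing edge per forward input of mm: self and mat2.
print(len(y.grad_fn.next_functions))  # 2
```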