Is there a piece of code that splits the backward process layer by layer?

Hi!
I want to insert some code between layers in the backward pass of a model (in this case, ResNet-50). I considered backward hooks, but they are not what I want. So I have searched /torch/csrc/autograd, /torch/c10, and /torch/aten, but I cannot find the piece of code that splits the backward pass layer by layer (like a for loop). I also stepped through the code with GDB after building from source in DEBUG mode, but it is difficult for me because of the Python code.
I just found out that autograd executes node by node (I'm not sure). Does a node of this graph represent a layer?

Does anybody know where the piece of code is that splits the backward pass layer by layer, or whether no such piece of code exists?

P.S. I am mainly examining the /torch/csrc/autograd/engine.cpp file (especially Engine::execute and Engine::thread_main). Am I on the right track?

Hi,

What do you mean by "layer by layer"? Do you mean the layers from nn? Or each operation that is performed?
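The distinction matters because the autograd graph records one node per *operation* (plus AccumulateGrad leaves for the parameters), not one node per nn layer. A quick way to see this is to walk the graph from Python via `grad_fn` / `next_functions` — here is a small sketch with a toy `nn.Sequential` model (node class names like `AddmmBackward0` may differ slightly between PyTorch versions):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
out = model(torch.randn(2, 4)).sum()

# Breadth-first walk over the backward graph, starting at the output.
# Each node is one recorded operation (AddmmBackward0, ReluBackward0, ...)
# plus AccumulateGrad leaves for the parameters -- not one node per nn layer.
names, seen, queue = [], set(), [out.grad_fn]
while queue:
    node = queue.pop(0)
    if node is None or node in seen:
        continue
    seen.add(node)
    names.append(type(node).__name__)
    queue.extend(n for n, _ in node.next_functions)

print(names)
```

Note that a single `nn.Linear` shows up as several nodes (the matmul, the weight transpose, and the gradient accumulation for weight and bias), which is why there is no "per-layer for loop" to find in the engine: Engine::execute just processes these nodes in topological order.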

An easy way to do this would be to add Tensor hooks at all the points where you want to stop; inside the hook you can fire up pdb, or run whatever code you want.
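To make the hook suggestion concrete, here is a minimal sketch using `Tensor.register_hook` on each intermediate activation of a toy `nn.Sequential` model (the model and hook names are illustrative, not from ResNet-50). Each hook fires during `backward()`, at the moment the gradient for that tensor has been computed, which is effectively "between layers" in the backward pass:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))

fired = []  # records the order in which the hooks fire during backward

def make_hook(name):
    def hook(grad):
        fired.append(name)
        # import pdb; pdb.set_trace()  # uncomment to drop into the debugger here
        return grad  # return a modified tensor here to change the gradient
    return hook

# Register a hook on each intermediate activation; each hook runs in
# backward, right after the gradient for that activation is computed.
h = torch.randn(2, 4)
for i, layer in enumerate(model):
    h = layer(h)
    h.register_hook(make_hook(f"layer {i} output"))

h.sum().backward()
print(fired)  # the hooks fire in reverse layer order
```

Returning a tensor from the hook replaces the gradient that flows to the earlier layers, so this also gives you a place to insert your own computation between layers without touching the autograd engine sources.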