torch::jit::script::Module with hook function?

Is there any way to compute the gradient of the activation map at a certain layer with respect to one of the output components with torch::jit::script::Module?
It seems hook functions only support nn::Module.
Could you provide a simple example?

Thank you very much in advance.

Hooks aren’t supported right now, but we are working on it and it will probably be done in the next couple of weeks.

Thank you very much for your reply @driazati !
I am looking forward to that feature. However, I have a deadline within one week, so I wonder whether there is any alternative way to compute the gradient of the hidden activation map at a certain layer with jit::script::Module?

Thank you very much for your attention to this matter.

We don’t have any support for hooks, so anything you do is going to be pretty hacky. That said, you might be able to get something working by adding a custom operator to TorchScript that just calls a C++ autograd::Function that does nothing except let you grab the gradients in the backward pass.

There’s an example of a custom op used in this way in torchvision, see https://github.com/pytorch/vision/blob/master/torchvision/csrc/empty_tensor_op.h and https://github.com/pytorch/vision/blob/43e94b39bcdda519c093ca11d99dfa2568aa7258/torchvision/csrc/vision.cpp#L51
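
To make that more concrete, here is a rough sketch of the idea (the GradProbe and my_ops::grad_probe names are made up for this example, and the torch::RegisterOperators call mirrors the registration in the torchvision code linked above):

```cpp
// Identity custom op backed by a torch::autograd::Function whose backward
// pass exposes the gradient flowing through it.
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>

struct GradProbe : public torch::autograd::Function<GradProbe> {
  // Forward is a no-op: the activation passes through unchanged.
  static torch::Tensor forward(torch::autograd::AutogradContext* /*ctx*/,
                               torch::Tensor input) {
    return input;
  }

  // Backward sees the gradient at this point of the graph; inspect or store
  // it here, then pass it along unchanged.
  static torch::autograd::tensor_list backward(
      torch::autograd::AutogradContext* /*ctx*/,
      torch::autograd::tensor_list grad_outputs) {
    std::cout << "gradient at probe: " << grad_outputs[0].sizes() << std::endl;
    return {grad_outputs[0]};
  }
};

torch::Tensor grad_probe(const torch::Tensor& input) {
  return GradProbe::apply(input);
}

// Register the op so TorchScript code can call it as my_ops::grad_probe(x).
static auto registry =
    torch::RegisterOperators("my_ops::grad_probe", &grad_probe);
```

Once the op is registered, the scripted model can wrap the activation of interest in my_ops::grad_probe(x); during the backward pass GradProbe::backward runs and the gradient at that point can be read out.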


Hello @driazati, does the current LibTorch 1.5 support hooks in the JIT model?
If so, could you please give an example of using a hook function?
Thank you very much for your attention to this matter.

Hi @driazati, does the current LibTorch 1.7 support hooks in the JIT model?
I want to check whether the output of each layer is correct, and I found that a forward_hook function plays an important role in that.

1.7 does not but 1.8 does!

Hello @SplitInfinity, do you have any example of using hook functions with jit::script::Module?
I can’t find anything about hook functions in LibTorch v1.10.2 (except at::Tensor::register_hook).
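
For reference, this is a minimal sketch of what I can do with at::Tensor::register_hook alone; it needs a tensor I already hold a reference to (here the model input), and "model.pt" plus the single-tensor output are placeholders:

```cpp
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::jit::script::Module module = torch::jit::load("model.pt");

  auto input = torch::rand({1, 3, 224, 224}).requires_grad_(true);

  // The hook fires during backward() once the gradient for this tensor is ready.
  input.register_hook([](torch::Tensor grad) {
    std::cout << "gradient w.r.t. input: " << grad.sizes() << std::endl;
  });

  // Assumes the model returns a single 2-D tensor (e.g. logits).
  auto output = module.forward({input}).toTensor();
  output[0][0].backward();
  return 0;
}
```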

Thanks in advance.

I tried saving the outputs of the hidden layer during the forward pass, and then using autograd to compute the hidden layer’s gradients.
That worked well; the results are the same as what backward hooks produce. But this can only read out and compute the gradients, not modify them.
On my side, JIT is used for inference rather than training. The inference stage needs some gradients from the hidden layer, and in that case backward hooks are not necessary.
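
For anyone who wants a concrete starting point, here is a minimal sketch of that approach. It assumes the scripted model was exported so that forward returns both the hidden activation and the final output; that two-output signature and "model.pt" are assumptions for this example.

```cpp
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::jit::script::Module module = torch::jit::load("model.pt");

  auto input = torch::rand({1, 3, 224, 224}).requires_grad_(true);

  // forward() is assumed to return (hidden_activation, final_output).
  auto result = module.forward({input}).toTuple();
  auto hidden = result->elements()[0].toTensor();
  auto output = result->elements()[1].toTensor();

  // Gradient of one output component w.r.t. the hidden activation, computed
  // with autograd instead of a backward hook. This only reads the gradient;
  // it cannot modify what flows through the graph.
  auto grads = torch::autograd::grad(
      /*outputs=*/{output[0][0]},
      /*inputs=*/{hidden},
      /*grad_outputs=*/{},
      /*retain_graph=*/true);

  std::cout << "d(output[0][0]) / d(hidden): " << grads[0].sizes() << std::endl;
  return 0;
}
```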