Is it valid to add new layers to the built-in Faster R-CNN with forward hooks?

I am looking for a way to add new conv2d and relu layers between the backbone and the RPN of Faster R-CNN. I have already created a custom nn.Module for the new layers. I also discovered forward hooks, which are called after each forward pass. My idea is to grab the output activations of the last layer of the ResNet backbone with such a hook, feed them through my new module, and then feed the resulting activations to the RPN.
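Roughly, what I have in mind is something like the sketch below (assuming torchvision's fasterrcnn_resnet50_fpn; extra_block and backbone_hook are just placeholder names for my new module and the hook):

```python
import torch
import torch.nn as nn
import torchvision
from collections import OrderedDict

# Placeholder for my new layers (conv2d + relu); the FPN backbone of
# fasterrcnn_resnet50_fpn outputs 256-channel feature maps
extra_block = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

def backbone_hook(module, inputs, output):
    # The FPN backbone returns an OrderedDict of feature maps; returning a
    # new dict from a forward hook replaces the backbone output, so the RPN
    # would receive the transformed feature maps
    return OrderedDict((name, extra_block(feat)) for name, feat in output.items())

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
hook_handle = model.backbone.register_forward_hook(backbone_hook)
```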

Is this a valid way to modify this kind of network?

If it is, will my new nn.Module receive gradient updates as usual?

Thanks in advance for any help.

Knock knock. Does anyone out there have an idea about this?

Forward hooks give you the output activations of a specific layer, and you can feed them to any additional layers without breaking the computation graph. Based on your description I'm not sure if that covers your use case, or if you want to manipulate the model internally and reuse other (later) layers.
In the latter case, it might be easier to check the source code of the model and reuse it to create a custom model.
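As a rough sketch of that second approach (BackboneWithExtraLayers and extra are just example names, not part of torchvision), you could wrap the backbone in a small module that applies your new layers to each feature map and assign it back to model.backbone:

```python
import torch
import torch.nn as nn
import torchvision
from collections import OrderedDict

class BackboneWithExtraLayers(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        # FasterRCNN reads out_channels from the backbone at construction
        # time, so keep the attribute in case the wrapper is used that way
        self.out_channels = backbone.out_channels
        self.extra = nn.Sequential(
            nn.Conv2d(self.out_channels, self.out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = self.backbone(x)  # OrderedDict of FPN feature maps
        # apply the extra layers to every feature level before the RPN
        return OrderedDict((k, self.extra(v)) for k, v in feats.items())

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.backbone = BackboneWithExtraLayers(model.backbone)

# quick smoke test
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 300, 400)])
```

Since the new layers are registered as submodules here, model.parameters() includes them and they will receive gradients during training as usual. With the hook approach, the extra module is not part of model.parameters(), so make sure you also pass its parameters to the optimizer.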