Trying to get the output of a layer with register_forward_pre_hook

I'm working on KeypointRCNN and I'm trying to get the outputs of the FPN (the feature maps), so I use register_forward_pre_hook like this:

for name, m in model.named_modules():
    if name == 'rpn.head':
        m.register_forward_pre_hook(get_layer)

where get_layer is a function that saves rpn.head's input, which (I guess) is the output of the FPN:

def get_layer(module, x):
    all_features_map.append(x)

Due to the limited memory of my 1050 Ti GPU, I can only run one image at a time, so I try this:

for run_images, target in dataset:
    with torch.no_grad():
        model.eval()
        model.to('cuda')
        run_images = torch.unsqueeze(run_images, 0).to(torch.float32).to(device)
        predictions = model(run_images)
        ...

However! I find that dim[0] of the outputs I get from get_layer increases linearly.
For the first image I get 1280, which is 256 * 5, so that's fine, but the next time I get 2560, then 3840, and so forth.
I guess that for some reason, when I use the for loop, the forward() function somehow gets invoked more times.
So I tried wrapping the whole operation above into a function get_feat, and I found that this way dim[0] of the features is always 1280, which is nice I guess.

for i in range(40):
    f, p, g = get_feat(i)
    feat_set_save.append(f)
    pos_set_save.append(p)
    gt_set_save.append(g)
    torch.cuda.empty_cache()
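(get_feat basically wraps the per-image forward pass from above; here is a rough sketch with the return values simplified, where f is the hooked feature maps, p the predictions, and g the ground-truth target:)

# rough sketch of get_feat (simplified; the p and g naming is assumed)
def get_feat(i):
    image, target = dataset[i]
    with torch.no_grad():
        batch = torch.unsqueeze(image, 0).to(torch.float32).to(device)
        predictions = model(batch)
    # the forward_pre_hook appended to all_features_map during model(batch)
    return all_features_map[-1], predictions, target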

However! For some unknown reason this approach requires more memory: I hit "CUDA out of memory" every time after running about 25 images through this function. I actually don't think it's my GPU's problem, because when I didn't wrap this operation in a function I had already run 1000 images (although the features were wrong).
Can you give me some advice? Thank you so much!!!

I cannot reproduce the increasing feature shape using a simple code snippet:

import torch
from torchvision import models

model = models.detection.keypointrcnn_resnet50_fpn()
model.eval()
x = [torch.randn(3, 224, 224), torch.randn(3, 256, 256)]

all_features = []
def get_layer(module, x):
    all_features.append(x)

for name, m in model.named_modules():
    if name == 'rpn.head':
        # register the hook on the matched submodule
        m.register_forward_pre_hook(get_layer)

out = model(x)
print([f[0][0].shape for f in all_features])

So I'm unsure what's causing the issue. Your current code is a bit hard to read; you can post code snippets by wrapping them in three backticks ```, which would make debugging easier.

Note that you are not detaching the activations before storing them in the list, which will thus keep the entire computation graph and all intermediate activations alive, unless you either detach() the tensors or wrap the forward pass into a with torch.no_grad() guard (which disallows gradient computation).
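For example, assuming the hook is still registered on rpn.head (whose forward receives a list of feature maps), something along these lines would avoid keeping the graph and GPU memory alive:

def get_layer(module, x):
    # x is a tuple of the module's positional inputs;
    # x[0] is the list of FPN feature maps fed to rpn.head.
    # detach() drops the graph, cpu() frees GPU memory.
    all_features_map.append([t.detach().cpu() for t in x[0]])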

Thank you for your help.
The reason you didn't hit the problem I mentioned may be that you didn't run the model for more than one batch.
I tested this morning: whether I use for image, target in dataset: or for i in range(100):, I get exactly the same features repeated, like this:


So I just choose one of those identical features. And I carefully checked my code.
To some extent I have solved my problem, but I have no idea how this happened; can you explain it to me if you are interested? Thanks!

Thank you for your patience!
If you are interested, here is my demonstration:

It's expected that all_features_map increases in size, since you are appending to the list, isn't it?
I’m not sure I understand the issue correctly, so feel free to post an executable code snippet and explain a bit more what the issue is. :wink:

OK! I made a stupid mistake.
I just figured out what had happened.
A simple test is below:

import torchvision
import torch

model = torchvision.models.detection.keypointrcnn_resnet50_fpn()
model.eval()

all_features = []

class test_dataset(object):
    def __init__(self):
        pass
    def __getitem__(self, idx):
        if idx >= len(self):
            raise IndexError  # stop iteration (the legacy iteration protocol ignores __len__)
        return torch.rand(3, 224, 224), torch.rand(1)
    def __len__(self):
        return 5

dataset = test_dataset()

for image, target in dataset:
    image = torch.unsqueeze(image, 0)  # use one image at a time due to the memory limit in practical use
    all_features = []
    def get_layer(module, x):
        all_features.append(x)
    model.rpn.head.register_forward_pre_hook(get_layer)  # registered again on every iteration!
    with torch.no_grad():
        out = model(image)
    print(len(all_features))

And it will print:
1 2 3 4 5 (one more each iteration)
It's because of this line:

model.rpn.head.register_forward_pre_hook(get_layer)  # registered again on every iteration!

It is executed repeatedly, so forward() gains another hook on every iteration, which results in the linearly increasing number of stored features.
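The fix is to register the hook once, outside the loop, e.g. like this (the handle returned by register_forward_pre_hook can also remove it later):

all_features = []
def get_layer(module, x):
    all_features.append(x)

# register the hook once, before the loop
handle = model.rpn.head.register_forward_pre_hook(get_layer)

for image, target in dataset:
    image = torch.unsqueeze(image, 0)
    all_features.clear()  # reuse the same list between images
    with torch.no_grad():
        out = model(image)
    print(len(all_features))  # now always 1

handle.remove()  # detach the hook when done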
Thank you for your coding advice! :grinning: