Extracting and modifying intermediate layer outputs from pre-trained models, and layer-by-layer inference

Hi everyone, I would like to know whether it is possible to extract the outputs of several layers (feature extraction), modify them, and feed them as new inputs to the following layers, continuing the inference from there.

The purpose is to inject faults into some features and study the behaviour of several pre-trained models by analyzing their accuracy on datasets such as ImageNet (quantized int8 and fp32 models from the PyTorch vision repo: Models and pre-trained weights — Torchvision 0.13 documentation).

A possible solution that occurs to me is some kind of layer-by-layer inference, if that is feasible, but I am not sure whether this is the proper way. Any help would be appreciated.


You could register forward hooks on the desired layers and manipulate the output.

Is there an easy example of how to do that?

You could use this code as a template and add your manipulation instead of storing the activations in the dict.
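As a rough illustration of the hook-based approach, here is a minimal sketch. The model, layer choice, and fault-injection scheme (sign-flipping a random fraction of activations) are all assumptions for the example; you would substitute your torchvision model and target module. The key mechanism is that a forward hook returning a tensor replaces the layer's output, so the following layers consume the perturbed activations:

```python
import torch
import torch.nn as nn

def make_fault_hook(fault_rate=0.1):
    """Hypothetical fault injector: flips the sign of a random
    fraction of the layer's output elements."""
    def hook(module, inputs, output):
        # Returning a tensor from a forward hook replaces the output
        # that is passed on to the next layers.
        mask = torch.rand_like(output) < fault_rate
        return torch.where(mask, -output, output)
    return hook

# Toy stand-in for a pre-trained torchvision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)
model.eval()

# Register the hook on the first conv layer; inference then proceeds
# normally, with the perturbed features feeding the following layers.
handle = model[0].register_forward_hook(make_fault_hook(0.1))

x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    out = model(x)

handle.remove()  # restore the unmodified model when done
```

For named submodules of a real model you would register the hook via something like `dict(model.named_modules())["layer1.0.conv1"].register_forward_hook(...)` instead of indexing, and remove the handle between experiments so fault runs don't accumulate.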

@ptrblck I wonder if there is also a way to use hooks for the case in the following post, which I submitted a few moments ago: Search and modify layer/module outputs by name

I’m not familiar enough with the quantization utilities, so I don’t know how these layers are registered and used.
You might want to move your other post to the Quantization category so that the code owners are aware of it.