Get intermediate layer outputs from traced graph

Hi

I am trying to visualize the intermediate layer outputs generated by a single input image during inference of a PyTorch model. Preferably I would like to do this from a traced graph, for example one from the torchvision model zoo.

That is, if I have a model file created like this:

import torch
import torchvision

org_model = torchvision.models.resnet18(pretrained=True)
traced_net = torch.jit.trace(org_model, torch.rand(1, 3, 224, 224))

torch.jit.save(traced_net, "resnet.pth") 

Then I want to be able to load that model and output the activations of, for example, "layer1":

traced_model_loaded = torch.jit.load("resnet.pth")

input_ = torch.rand(1, 3, 224, 224)
layer1_act = traced_model_loaded.layer1(input_)

Is this possible? If not, can I somehow modify the original PyTorch model so that an arbitrary number of layer activations become accessible? Using forward hooks on the traced module does not seem to be supported.

Thanks!

I think you can always output the activations using org_model, or debug in Python however you want, and once you are happy with it, do the tracing and serialization.

If you want to see the intermediates in the traced model, you can still modify the original model and add print statements etc. to debug it.
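For example, here is a minimal eager-mode sketch of that suggestion, patching layer1's forward on a torchvision ResNet-18 instance to print its output shape before any tracing happens (the instance patching is just one quick way to inject a print, not a dedicated API):

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()

# Patch layer1's forward on this instance only; nn.Module.__call__ looks up
# self.forward, so the instance attribute shadows the class method.
orig_layer1_forward = model.layer1.forward

def layer1_forward_with_print(x):
    out = orig_layer1_forward(x)
    print("layer1 output:", out.shape)
    return out

model.layer1.forward = layer1_forward_with_print

with torch.no_grad():
    model(torch.rand(1, 3, 224, 224))  # prints the layer1 output shape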

Yes, but it would require me to have the original model. What I wanted was a generic way to visualize the activations of an arbitrary layer (or channel) of an arbitrary model at inference time. However, I have realized that this is not possible: you need the original model. The easiest way of doing it, given the model, was to register forward hooks, which output the resulting activations during a forward pass. So I got the functionality I wanted, but not with the traced graph.
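For reference, a minimal sketch of that hook-based approach on the original model (it assumes nn.Module.get_submodule, available since PyTorch 1.9; the activations dict and hook factory are just one way to organize it):

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
activations = {}

def make_hook(name):
    # Forward hooks receive (module, inputs, output); store the output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name in ["layer1", "layer2"]:
    model.get_submodule(name).register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.rand(1, 3, 224, 224))

print(activations["layer1"].shape)  # torch.Size([1, 64, 56, 56])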

Did you find a solution in the end?

I also need this! Any solution?

Something like this would work: create a wrapper that returns the intermediate outputs alongside the model's regular output, and trace the wrapper instead. Tracing records every tensor operation executed during the forward pass, so the tensors captured by the hook simply become additional outputs of the traced graph.

from typing import Any, List

import torch
from torch import nn

def _maybe_flatten(outputs: Any) -> List[torch.Tensor]:
    # Treat a single tensor and a tuple/list of tensors uniformly.
    if isinstance(outputs, torch.Tensor):
        return [outputs]
    return list(outputs)

class TracingWithLayerOutputs(nn.Module):
    def __init__(self, model: nn.Module, layer_name: str):
        super().__init__()
        self.model = model
        self.layer_name = layer_name

    def forward(self, inputs):
        # get_submodule resolves dotted names such as "backbone.conv1".
        submodule = self.model.get_submodule(self.layer_name)
        extra_outputs = []

        class Hook:
            def __call__(self, module, inputs, outputs):
                extra_outputs.extend(x.cpu() for x in _maybe_flatten(outputs))
                return outputs

        handle = submodule.register_forward_hook(Hook())
        outputs = self.model(inputs)
        handle.remove()
        # The captured tensors become additional outputs of the traced graph.
        return (outputs,) + tuple(extra_outputs)

trace = torch.jit.trace(TracingWithLayerOutputs(model, "backbone.conv1"), inputs)
outputs, conv1_outputs = trace(inputs)

Pull Request #96 in facebookresearch/fvcore on GitHub ("flop count: capture output tensors from all layers so that unused layers are correctly counted", by ppwwyyxx) has a working example of a similar technique (applied to a different problem).