Correct me if this isn’t a JIT problem, but I would like to see the resulting output after each node in a graph (a traced graph specifically). Is there any way to do this easily? My current approach (which only looks at layers) is to modify the model to save each layer’s output and return it from the forward function, but this isn’t ideal: I would like to look at each node and, ideally, not modify the model.
There is no way to do this today; we’ve had this issue open for a while, which describes what you want. Until we implement something like that, storing the results manually is the only way to go. A cleaner way than adding a giant return to your forward is to store intermediate results as attributes on the module, something like:
```python
import torch
import torch.nn as nn

class X(nn.Module):
    # Annotate the attribute types so TorchScript knows what they hold
    layer1: torch.Tensor
    layer2: torch.Tensor

    def __init__(self):
        super().__init__()
        # Initialize with empty tensors: attributes annotated as
        # torch.Tensor can't be None under TorchScript
        self.layer1 = torch.empty(0)
        self.layer2 = torch.empty(0)
        self.fc1 = nn.Linear(10, 10)

    def forward(self, x):
        # Stash each intermediate result on the module as we go
        self.layer1 = self.fc1(x)
        self.layer2 = self.fc1(self.layer1)
        return self.layer2

torch.jit.script(X())
```
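The same pattern also works in plain eager mode, where the stored attributes can be read back directly after a forward pass. A minimal sketch (the `Probe` name and the layer sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

class Probe(nn.Module):
    """Same pattern as above: stash intermediates on the module."""
    def __init__(self):
        super().__init__()
        self.layer1 = torch.empty(0)
        self.layer2 = torch.empty(0)
        self.fc1 = nn.Linear(10, 10)

    def forward(self, x):
        self.layer1 = self.fc1(x)
        self.layer2 = self.fc1(self.layer1)
        return self.layer2

m = Probe()
m(torch.randn(2, 10))
# The intermediates are now ordinary attributes you can inspect
print(m.layer1.shape)  # torch.Size([2, 10])
```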
torchvision has a similar problem; they use this class as a workaround.
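If you only need per-layer (rather than per-node) outputs, forward hooks are another workaround that avoids modifying the model entirely. Note that hooks fire once per submodule, so they won’t show you intermediate values inside a single module’s forward or fused ops in the traced graph. A sketch, using a throwaway `nn.Sequential` as the model:

```python
import torch
import torch.nn as nn

# Capture every submodule's output with forward hooks --
# no changes to the model itself are needed.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 5))
outputs = {}

def make_hook(name):
    def hook(module, inputs, output):
        outputs[name] = output.detach()
    return hook

handles = [m.register_forward_hook(make_hook(name))
           for name, m in model.named_modules() if name]  # skip the root

model(torch.randn(3, 10))
for h in handles:
    h.remove()  # clean up so later calls aren't recorded

print(sorted(outputs))  # ['0', '1', '2']
```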