In one .py file, I save a traced TorchScript model:
torch.jit.save(torch.jit.trace(model_quantized, example_inputs=input), "model.pt")
In another .py file, I load and run the model:
model_quantized = torch.jit.load("model.pt")
model_quantized(input)
graph = model_quantized.graph
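For reference, here is a minimal, self-contained repro of the save/load flow above, with a small hypothetical module standing in for my actual model_quantized:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for model_quantized: a tiny two-layer net.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = Net().eval()
example = torch.randn(1, 4)

# Trace, save, then load the module back and run it.
torch.jit.save(torch.jit.trace(model, example_inputs=example), "model.pt")
loaded = torch.jit.load("model.pt")
out = loaded(example)

# The TorchScript IR for forward() is exposed via the .graph attribute.
graph = loaded.graph
print(graph)
```

In my real code, the model is quantized before tracing, but the graph-inspection question is the same.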
How can I modify this graph so that I can also get the output of an intermediate layer?