I need a way to connect `model.modules()` entries to the nodes of an exported JIT graph.
In torch 1.3 I used to be able to do:

```python
import torchvision
import torch
from torch.onnx import utils

model = torchvision.models.resnet18()
tensor = torch.randn([1, 3, 224, 224])

trace, out = torch.jit.get_trace_graph(model, tensor)
graph = trace.graph()
graph = utils._optimize_graph(graph, operator_export_type=torch._C._onnx.OperatorExportTypes.ONNX)

for i, node in enumerate(graph.nodes()):
    # scopeName() looks like "ResNet/Sequential[layer1]/BasicBlock[0]/Conv2d[conv1]";
    # keep the bracketed attribute names and join them into "layer1.0.conv1"
    scopename = ".".join(
        part.split("[")[-1]
        for part in node.scopeName().split("]")
        if "[" in part
    )
    print("node.scopeName() <%s> named_module:" % scopename, dict(model.named_modules())[scopename])
```
This gives me a nice way to inspect the latents with forward hooks while also knowing the connectivity, so I can see properties like the receptive field and so on.
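For reference, here is roughly what I mean by inspecting latents with hooks — a minimal sketch using only `named_modules()` and `register_forward_hook` (the dict layout is just illustrative):

```python
import torch
import torchvision

model = torchvision.models.resnet18()
activations = {}

def make_hook(name):
    # store each module's output under its named_modules() key, so it can
    # be matched against the scope names recovered from the graph
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(make_hook(name))

model(torch.randn([1, 3, 224, 224]))
# activations now maps e.g. "layer1.0.conv1" -> that module's output tensor
```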
Since 1.4, scopeName() is empty. It looks like the scope information gets wiped out instead of being stored in the trace produced by jit.trace.
I just tried in 1.8 and it is still empty. I do realize the code should change to something like:
```python
from torch.onnx import TrainingMode
from torch.onnx import utils

graph, dic, out = utils._model_to_graph(
    model, tensor, training=TrainingMode.TRAINING, _retain_param_name=True
)
for i, node in enumerate(graph.nodes()):
    print(node.scopeName())  # prints an empty string for every node
```
I know I can use `_retain_param_name` and then grab the common non-numeric prefix of a node's input names, which gives me the scope name for nodes that take named parameters as inputs: Conv yes, but not other nodes like ReLUs… (sketch just below).
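Concretely, one way to write that hack (a sketch only; `scope_from_param_inputs` is my own illustrative helper, and it assumes `_retain_param_name=True` so parameter inputs keep dotted names like `layer1.0.conv1.weight`):

```python
def scope_from_param_inputs(node):
    # with _retain_param_name=True, parameter inputs keep dotted names
    # like "layer1.0.conv1.weight"; drop the trailing ".weight"/".bias".
    # This is a heuristic: purely numeric debug names are skipped.
    names = [inp.debugName() for inp in node.inputs()]
    candidates = [n.rsplit(".", 1)[0] for n in names
                  if "." in n and not n.split(".")[0].isnumeric()]
    # nodes with no named-parameter inputs (ReLU, add, ...) give nothing
    return candidates[0] if candidates else None

for node in graph.nodes():
    print(node.kind(), "->", scope_from_param_inputs(node))
```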
I also know that instead of `_model_to_graph()` I could do:
```python
trace = torch.jit.trace(model, tensor)
graph = trace.inlined_graph
```
which keeps the dirtier, non-optimized version of the graph, out of which I can still get the scope name. Anyway, these are all hacks (sketch below).
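Spelled out, that looks something like this (the scope string format here is an assumption: on the versions I've looked at, inlined scopes look roughly like `__module.layer1/__module.layer1.0/__module.layer1.0.conv1`, so verify on your build):

```python
import torch
import torchvision

model = torchvision.models.resnet18()
tensor = torch.randn([1, 3, 224, 224])

trace = torch.jit.trace(model, tensor)
modules = dict(model.named_modules())

for node in trace.inlined_graph.nodes():
    scope = node.scopeName()  # e.g. ".../__module.layer1.0.conv1" (assumed format)
    name = scope.split("/")[-1].replace("__module.", "")
    if name and name in modules:
        print(node.kind(), "->", name, type(modules[name]).__name__)
```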
- Is there a reason scopeName() gets cleared by the JIT?
- Does anyone know of an alternative to scopeName() that would let me connect a graph to the trace of that graph, node by node?
Or do I have to fix the code myself, probably somewhere around `pytorch/torch/csrc/jit/ir/ir.h`?