Hello!
I am trying to convert a quantized model to Caffe2. I know that TorchScript tracing is needed before ONNX export. I am converting an RCNN model in the following way (sketched below):
1) perform quantization,
2) trace the quantized backbone to TorchScript,
3) swap the original backbone with the quantized one (keeping the other parts of the network as they were),
4) patch and export the network with ONNX.
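The steps, roughly, as code (a simplified sketch of my setup; `model`, `cfg`, and `first_batch` come from my own config/data code, and quantization details such as QuantStub/DeQuantStub placement and calibration are omitted):

```python
import torch
from torch.quantization import get_default_qconfig, prepare, convert

# 1) post-training static quantization of the backbone
backbone = model.backbone.eval()
backbone.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(backbone)
# ... feed calibration batches through `prepared` here ...
quantized_backbone = convert(prepared)

# 2) trace the quantized backbone to TorchScript
example = torch.rand(1, 3, 800, 800)  # placeholder input size
traced_backbone = torch.jit.trace(quantized_backbone, example)

# 3) swap the original backbone for the traced, quantized one;
#    the rest of the RCNN model stays in eager mode
model.backbone = traced_backbone

# 4) patch and export with ONNX via detectron2's Caffe2 export
from detectron2.export import export_caffe2_model
caffe2_model = export_caffe2_model(cfg, model, first_batch)
```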
The original (non-quantized) network converts to Caffe2 without any errors, but when I swap in the quantized backbone and export to ONNX, I see the following:
Traceback (most recent call last):
File "./tools/torchscript_converter.py", line 158, in <module>
caffe2_model = export_caffe2_model(cfg, orig_model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 148, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 67, in export_onnx_model
export_params=True,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 525, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 364, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 317, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 277, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 359, in forward
self._force_outplace,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 345, in wrapper
outs.append(self.inner(*trace_inputs))
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 560, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 546, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/root/some_detectron2/detectron2/export/caffe2_modeling.py", line 326, in forward
features = self._wrapped_model.backbone(images.tensor)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 560, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 536, in _slow_forward
return self.forward(*input, **kwargs)
RuntimeError: Tried to trace <__torch__.densepose.modeling.layers.bifpn.BiFPN object at 0x5570e7633550> but it is not part of the active trace. Modules that are called during a trace must be registered as
submodules of the thing being traced.
Is this because I've mixed a TracedModule with eager-mode modules?
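To clarify what I mean by mixing, the model after the swap has roughly this shape (hypothetical stand-in modules, not my real network):

```python
import torch
import torch.nn as nn

class Wrapper(nn.Module):
    """Eager-mode model holding a traced (ScriptModule) backbone."""
    def __init__(self, traced_backbone):
        super().__init__()
        self.backbone = traced_backbone   # TorchScript module
        self.head = nn.Linear(16, 2)      # eager stand-in for the RCNN heads

    def forward(self, x):
        return self.head(self.backbone(x))

backbone = torch.jit.trace(
    nn.Sequential(nn.Flatten(), nn.Linear(3 * 4 * 4, 16)),
    torch.rand(1, 3, 4, 4),
)
model = Wrapper(backbone)
# exporting a model that mixes a traced submodule with eager ones
torch.onnx.export(model, torch.rand(1, 3, 4, 4), "mixed.onnx")
```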