Hello everyone!
Once again I’ve run into something I don’t understand and need your help.
I’ve compiled speed_benchmark_torch
for mobile to test my TorchScript RCNN model.
The model is from detectron2. I slightly modified the caffe2 conversion of the RCNN to use torch.jit.trace
instead of torch.onnx.export:
https://github.com/facebookresearch/detectron2/blob/master/tools/deploy/caffe2_converter.py
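For context, the change amounts to roughly the following (a minimal sketch; `traceable_model` and `sample_inputs` are placeholders for whatever the converter script builds right before the torch.onnx.export call, not the actual variable names):

```python
import torch

def export_torchscript(traceable_model, sample_inputs, out_path="model.ts"):
    # Trace the caffe2-compatible model instead of exporting it to ONNX.
    # The traced graph keeps the _caffe2::* ops (e.g. GenerateProposals)
    # as custom ops inside the resulting TorchScript module.
    traceable_model.eval()
    with torch.no_grad():
        ts_model = torch.jit.trace(traceable_model, (sample_inputs,))
    ts_model.save(out_path)
    return ts_model
```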
That gives a valid model that runs fine on PC, but on mobile I hit this issue:
Unknown builtin op: _caffe2::GenerateProposals.
Could not find any similar ops to _caffe2::GenerateProposals. This op may not exist or may not be currently supported in TorchScript.
...
Serialized File "code/__torch__/detectron2/export/c10.py", line 22
scores = torch.detach(_6)
bbox_deltas = torch.detach(_7)
_16, _17 = ops._caffe2.GenerateProposals(scores, bbox_deltas, im_info, _4, 0.25, 1000, 100, 0.69999999999999996, 0., True, -180, 180, 1., False, None)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
scores0 = torch.detach(_8)
bbox_deltas0 = torch.detach(_9)
So the question is: is it possible to run a TorchScript model with caffe2 ops on mobile, or should we use the old speed_benchmark binary for caffe2 models?
And why are there caffe2-related options in ./speed_benchmark_torch --help?