I want to integrate the Glow compiler with a neural network (call it X) and run the optimized code that Glow outputs on a specific backend. Can someone please suggest how this could be achieved, i.e. where the integration point between the neural network (X) and the Glow compiler should be?
Hi @Kittu28 – it would help if you could provide more information, such as the format of your current model (PyTorch? ONNX? Caffe2? TF?), and what your target HW is (x86 or ARM CPU? GPU? Specialized HW).
It's Caffe2, and the target HW is a DSP.
If you know anything about this or have any resources on it, please help me. Thank you very much for the reply.
For Caffe2 loading, we have a Caffe2ModelLoader that will load a model consisting of a pair of protobuf files: init_net.pb (the weights) and predict_net.pb (the network definition). If you're targeting some custom DSP then you will most likely need to build your own backend to compile down to it. Assuming there is some CPU driving execution, you may be able to create an LLVM-based backend. It's hard to say exactly what you need to do based on the info you've provided, though. This doc is a good place to start on loading a model and compiling a binary for it, but you'd still need to build the backend that supports your HW.
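To make the loading step concrete, here is a rough C++ sketch of how the Caffe2ModelLoader is typically driven. This is not compilable standalone (it needs a built Glow tree), and the input name "data", the input shape, and the use of the "CPU" backend are all assumptions you'd adjust for your own model and DSP backend:

```cpp
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Graph/Graph.h"
#include "glow/Importer/Caffe2ModelLoader.h"

using namespace glow;

int main() {
  // "CPU" is the built-in LLVM-based backend; a custom DSP backend
  // would register and use its own name here.
  ExecutionEngine EE("CPU");
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("main");

  // Input name/shape are assumptions -- match your model's real input.
  Tensor inputData(ElemKind::FloatTy, {1, 3, 224, 224});

  // Load the Caffe2 pair: predict_net.pb (graph) + init_net.pb (weights).
  Caffe2ModelLoader loader("predict_net.pb", "init_net.pb",
                           {"data"}, {&inputData.getType()}, *F);

  // Optimize and compile the function for the chosen backend.
  EE.compile(CompilationMode::Infer);

  // Bind concrete tensors to placeholders and run inference.
  PlaceholderBindings bindings;
  bindings.allocate(mod.getPlaceholders());
  updateInputPlaceholders(bindings,
                          {mod.getPlaceholderByNameSlow("data")},
                          {&inputData});
  EE.run(bindings);
}
```

Alternatively, if you want an ahead-of-time compiled bundle rather than JIT execution, Glow's model-compiler tool can emit one from the same pair of protobuf files (see the AOT doc for the exact flags for your Glow version).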