How to use the AOT bundle in PyTorch?

Hi, I was trying to use the bundle that is generated using Glow. The instructions mention that I need to link the generated bundle into my application. How can I link it with my PyTorch code? I wish to run some tests comparing PyTorch execution vs. Glow AOT bundle execution. https://github.com/pytorch/glow/blob/master/docs/AOT.md

If you want to run it driven from PyTorch, you'd still need to create a C++ wrapper (see here for how to use the generated bundle) that PyTorch can call into through a bridge.
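For reference, here's a rough sketch of what such a wrapper might look like, assuming a network compiled under the hypothetical name `my_model`, so that the auto-generated header declares a `BundleConfig` named `my_model_config` and an entry function `my_model(...)`. The exact field names, placeholder names ("input"/"output"), and the weights file name all come from the header and files your build actually emits, so treat this as a template rather than drop-in code:

```cpp
// Minimal host wrapper for a Glow AOT bundle, assuming the network was
// compiled under the (hypothetical) name "my_model". The auto-generated
// header "my_model.h" is expected to declare BundleConfig, the symbol
// table, the global `my_model_config`, and the entry function
// `my_model(constantWeight, mutableWeight, activations)`.
// Verify these names against the header your build actually emits.
#include "my_model.h"

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Allocate a zeroed buffer with the alignment the bundle asks for.
static uint8_t *allocBuffer(uint64_t size, uint64_t alignment) {
  void *ptr = nullptr;
  if (posix_memalign(&ptr, alignment, size) != 0) {
    return nullptr;
  }
  std::memset(ptr, 0, size);
  return static_cast<uint8_t *>(ptr);
}

// Load the constant weights file produced alongside the bundle
// (e.g. my_model.weights.bin) into the constant-weights memory area.
static bool loadWeights(const char *path, uint8_t *dst, uint64_t size) {
  FILE *f = std::fopen(path, "rb");
  if (!f) return false;
  size_t read = std::fread(dst, 1, size, f);
  std::fclose(f);
  return read == size;
}

// Look up a placeholder (input/output tensor) by name in the symbol table
// and return its entry, which contains its offset in the mutable area.
static const SymbolTableEntry *findSymbol(const char *name) {
  for (uint64_t i = 0; i < my_model_config.numSymbols; ++i) {
    if (std::strcmp(my_model_config.symbolTable[i].name, name) == 0) {
      return &my_model_config.symbolTable[i];
    }
  }
  return nullptr;
}

int main() {
  const BundleConfig &cfg = my_model_config;

  uint8_t *constants = allocBuffer(cfg.constantWeightVarsMemSize, cfg.alignment);
  uint8_t *mutables = allocBuffer(cfg.mutableWeightVarsMemSize, cfg.alignment);
  uint8_t *activations = allocBuffer(cfg.activationsMemSize, cfg.alignment);

  if (!loadWeights("my_model.weights.bin", constants,
                   cfg.constantWeightVarsMemSize)) {
    std::fprintf(stderr, "failed to load weights\n");
    return 1;
  }

  // "input" and "output" are placeholder names used here for illustration;
  // use the names listed in the generated symbol table for your model.
  const SymbolTableEntry *in = findSymbol("input");
  const SymbolTableEntry *out = findSymbol("output");
  if (!in || !out) {
    std::fprintf(stderr, "placeholder not found in symbol table\n");
    return 1;
  }

  // Fill the input region with your test data (for instance a tensor
  // exported from PyTorch). Check the generated header's comments for
  // whether the symbol's `size` field counts bytes or elements.
  float *inputData = reinterpret_cast<float *>(mutables + in->offset);
  inputData[0] = 0.0f; // ... populate the full input tensor here.

  // Run one inference by calling the bundle's entry point.
  my_model(constants, mutables, activations);

  // Read the results back out of the mutable region.
  const float *outputData =
      reinterpret_cast<const float *>(mutables + out->offset);
  std::printf("first output value: %f\n", outputData[0]);

  std::free(constants);
  std::free(mutables);
  std::free(activations);
  return 0;
}
```

You'd then compile this file and link it against the bundle object (something like `g++ main.cpp my_model.o -o run_bundle`), and your PyTorch script could dump input tensors to disk and compare its outputs against what this binary prints.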

But to take a step back, it might make sense to just run via torch_glow, without a bundle. I'm not sure if there's a specific reason you wanted to use a bundle here.

@jfix… the whole idea was to run inference in PyTorch and compare it to the AOT bundle that is generated using Glow. I assume that PyTorch has been focusing on CPU and multi-core inference, whereas Glow is focused more on hardware accelerators. I did have a question: if I had to compare execution of both on a CUDA GPU, what would be a good way to get some results?

Sure, but you don’t need to use an AOT bundle to do so. If you go through the torch_glow path, nothing is AOT; everything is compiled just-in-time.

We don’t have direct support for CUDA right now. Theoretically you could try to target CUDA via our LLVM backend, but it’s been much more targeted toward CPUs, and I don’t know of anyone who has tried this. We also have an OpenCL backend, but it’s not a big focus of ours.

In open source we have focused more on inference accelerator backends like NNPI and Habana. There are also open-source users who contribute PRs, usually targeting LLVM-based backends, but those backends are all kept private by their contributors.