Using Glow with a model made using the PyTorch Python API

(Karanraj Chauhan) #1

I have a model defined and trained using the Python API of PyTorch. I want to optimize it for inference. How exactly do I proceed?

I can’t seem to find any documentation or examples (the ones in the Glow repo are C++ only).

(Jordan Fix) #2

I want to optimize it for inference.

I’m not sure what this means. Do you have a specific Glow backend you want to run on?

I can’t seem to find any documentation or examples (the ones on the glow repo are cpp only).

To use Glow you need C++, at least to build the project and use it out of the box. Whether that works may depend on your model – for example, we have an image-classifier driver that runs on any of our backends, so as long as you can get your model into ONNX or Caffe2 protobuf form, we can load it there.

(Karanraj Chauhan) #3

Apologies if I’m being vague. I want to reduce the inference time so I can process more frames per second (it’s an image processing project). I will be running this on CPU.

I have exported the model to ONNX using torch.onnx.export. How should I load it?

Thanks for your help. Much appreciated.

(Jordan Fix) #4

You can follow the directions in our docs – for example, for running image-classification models you may be able to use our image-classifier. You can find more info here.
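As a rough sketch, an invocation might look like the following. This assumes Glow has already been built, that the model was exported as `model.onnx` with an input tensor named `input`, and that `frame.png` is a test image; check `./bin/image-classifier -help` in your build for the exact flags it supports:

```shell
# Run the exported ONNX model through Glow's image-classifier driver
# on the CPU backend. Flag names are assumptions based on the driver's
# documentation -- verify them against your build.
./bin/image-classifier frame.png \
    -m=model.onnx \
    -model-input-name=input \
    -image-mode=0to1 \
    -backend=CPU
```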