Resize implementation in OpenCL

Hello

I have a specific question about how the Resize instruction is handled by the OpenCL backend. Resize does not appear to be supported by the OpenCL backend; it is not in the list in isOpSupported.

How is an operation that the OpenCL backend does not support executed? My guess is that Resize is executed in the host environment and its output is exchanged back and forth with the device, but I'm not sure which part of Glow is responsible for that.

It would be great if someone could give me a pointer to the code I should look into.

Best

> It is not in the list of isOpSupported.

It is supported; perhaps you just missed it. It's listed here:

Thank you!
But IIUC, the Resize node has ResizeNearestNodeKind, and I could not find that node kind in the OpenCL backend's list. Does the Resize node have ReshapeNodeKind instead?
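
For context, Glow backends report per-node support by switching on the node kind inside isOpSupported. Below is a minimal sketch of that pattern, not the actual OpenCLBackend source; the kind names come from Glow's auto-generated node kinds. A kind with no case, like ResizeNearestNodeKind, falls through to "unsupported", which also shows how Reshape can be misread as Resize when scanning the list:

```cpp
// Minimal sketch of the isOpSupported pattern used by Glow backends
// (not the actual OpenCLBackend source).
#include "glow/Graph/Nodes.h" // brings in the auto-generated Kinded::Kind enum

using namespace glow;

bool isOpSupportedSketch(Kinded::Kind kind) {
  switch (kind) {
  case Kinded::Kind::ConvolutionNodeKind:
  case Kinded::Kind::ReshapeNodeKind: // Reshape IS listed, easy to misread
                                      // as Resize when skimming.
    // ... many other supported kinds elided ...
    return true;
  default:
    // Kinded::Kind::ResizeNearestNodeKind lands here, so Resize is
    // reported as unsupported by this backend.
    return false;
  }
}
```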

Oh sorry, I misread your original post 🙂

You are correct, Resize is not supported on the OpenCL backend. Depending on which frontend you're using (PyTorch, Caffe2, ONNX, TFLite, etc.), this behaves differently: some of these frontends only delegate ops to Glow if they're supported on the target backend, so the Resize would automatically be executed in the base framework (e.g. PyTorch) and only the other piece(s) of the model would be delegated to Glow for you.
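
To illustrate that delegation decision, here is a hypothetical sketch of how a frontend can split a model based on backend support. The names `FrameworkNode` and `glowSupports` are made up for illustration and are not a real frontend API:

```cpp
// Hypothetical sketch of the per-op delegation decision some frontends
// make: ops the target backend supports are delegated to Glow, the rest
// stay in the base framework.
#include <string>
#include <vector>

struct FrameworkNode {
  std::string opName; // e.g. "Conv", "Resize"
};

// Assumed support query, e.g. backed by Backend::isOpSupported.
bool glowSupports(const FrameworkNode &n) { return n.opName != "Resize"; }

void splitByBackendSupport(const std::vector<FrameworkNode> &model,
                           std::vector<FrameworkNode> &toGlow,
                           std::vector<FrameworkNode> &toFramework) {
  for (const FrameworkNode &n : model) {
    if (glowSupports(n)) {
      toGlow.push_back(n); // compiled and run by Glow on the device
    } else {
      toFramework.push_back(n); // e.g. Resize runs in the base framework
    }
  }
}
```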

However, assuming it is delegated/loaded into Glow (e.g. if it's loaded via ONNX), then it will most likely just throw an error during compilation. You would need to use heterogeneous partitioning to assign the Resize node to e.g. the Glow CPU backend, so that execution bounces back and forth between your OpenCL device and the host CPU.
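
Heterogeneous partitioning is described in docs/Partitioner.md. Below is a hedged C++ sketch of what setting it up might look like, assuming the "supportedNodes"/"nonSupportedNodes" DeviceConfig parameters described there; the exact parameter keys and the DeviceConfig/HostManager constructors should be verified against your Glow version:

```cpp
// Hedged sketch of heterogeneous partitioning setup across an OpenCL
// device and the CPU backend, based on docs/Partitioner.md.
#include "glow/Runtime/HostManager/HostManager.h"

#include <memory>
#include <vector>

using namespace glow::runtime;

std::unique_ptr<HostManager> makeHeterogeneousHostManager() {
  std::vector<std::unique_ptr<DeviceConfig>> configs;

  // OpenCL device: handles everything except ResizeNearest.
  auto ocl = std::make_unique<DeviceConfig>("OpenCL");
  ocl->parameters["nonSupportedNodes"] = "ResizeNearest";
  configs.push_back(std::move(ocl));

  // CPU device: picks up the ResizeNearest partition(s).
  auto cpu = std::make_unique<DeviceConfig>("CPU");
  cpu->parameters["supportedNodes"] = "ResizeNearest";
  configs.push_back(std::move(cpu));

  // The Partitioner splits the Function across these devices; at run time,
  // tensors cross the partition boundary between the OpenCL device and CPU.
  return std::make_unique<HostManager>(std::move(configs));
}
```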

@jfix The information about heterogeneous partitioning is very helpful. Thank you so much for the explanation!
