Adding Native Pruning Support

Hi All,

I want to add support for pruning: the idea is to attach a hint to a layer, say a flag called ‘pruned’, so that instead of performing the convolution we only compute the output shape and return a zero tensor. The goal is basically to skip unnecessary ops on weights that have been pruned away.
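Before going to C++, the idea can be sketched at the Python level. This is only an illustration under my own naming: the `MaybePrunedConv2d` class and its `pruned` flag are hypothetical, not an existing PyTorch API. It computes the conv output shape from the usual formula and returns zeros instead of convolving:

```python
import torch
import torch.nn as nn

class MaybePrunedConv2d(nn.Conv2d):
    """Hypothetical Conv2d with a `pruned` flag: when set, skip the
    convolution entirely and emit a zero tensor of the right shape."""

    def __init__(self, *args, pruned=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.pruned = pruned

    def forward(self, x):
        if not self.pruned:
            return super().forward(x)
        # Shape bookkeeping only -- no convolution is performed.
        n, _, h, w = x.shape
        h_out = (h + 2 * self.padding[0]
                 - self.dilation[0] * (self.kernel_size[0] - 1) - 1) // self.stride[0] + 1
        w_out = (w + 2 * self.padding[1]
                 - self.dilation[1] * (self.kernel_size[1] - 1) - 1) // self.stride[1] + 1
        return x.new_zeros(n, self.out_channels, h_out, w_out)

# Example: a pruned 3x3 conv over a 16x16 input yields zeros of shape (1, 8, 14, 14).
conv = MaybePrunedConv2d(3, 8, 3, pruned=True)
out = conv(torch.randn(1, 3, 16, 16))
```

This still pays the Python dispatch cost per call, which is why pushing the shortcut down into the C++ kernels is attractive for real savings.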

Is there some type of architecture document that covers the call graphs, or how Python invokes the native C++ functions for the different architectures? I am interested in understanding the entry points for the different calls. For example, if I want to edit the C++ code for the convolution operation, which function is the entry point? There are a bunch of Conv implementations, and without guidance it is hard to keep track of them.

Thank you!

A PyTorchy way of doing this would be to work at the module level, e.g. using the brand-new FX facility or a JIT representation, and mutate the module / JIT graph.
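To make the FX route concrete, here is a minimal sketch under my own naming (the `ZeroOut` stand-in module and the `zero_pruned` helper are made up for illustration): trace the model, then reroute every `call_module` node that targets a pruned conv to a replacement that just emits zeros of the conv's output shape:

```python
import torch
import torch.nn as nn
import torch.fx as fx

class ZeroOut(nn.Module):
    """Stand-in for a pruned Conv2d: emits zeros of the conv's output
    shape without ever running the convolution."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.out_channels = conv.out_channels
        self.k, self.s = conv.kernel_size, conv.stride
        self.p, self.d = conv.padding, conv.dilation

    def forward(self, x):
        n, _, h, w = x.shape
        h_out = (h + 2 * self.p[0] - self.d[0] * (self.k[0] - 1) - 1) // self.s[0] + 1
        w_out = (w + 2 * self.p[1] - self.d[1] * (self.k[1] - 1) - 1) // self.s[1] + 1
        return x.new_zeros(n, self.out_channels, h_out, w_out)

def zero_pruned(model: nn.Module, pruned: set) -> fx.GraphModule:
    """Trace `model` with FX and reroute calls to convs named in `pruned`."""
    gm = fx.symbolic_trace(model)
    for node in gm.graph.nodes:
        if node.op == "call_module" and node.target in pruned:
            repl = node.target + "_zero"
            gm.add_submodule(repl, ZeroOut(gm.get_submodule(node.target)))
            node.target = repl
    gm.recompile()
    return gm

# Demo model for the rewrite.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 4, 3)
        self.conv2 = nn.Conv2d(4, 2, 3)
    def forward(self, x):
        return self.conv2(torch.relu(self.conv1(x)))

gm = zero_pruned(Net(), {"conv1"})
```

The nice part of doing it at this level is that the unpruned submodules are untouched, so the rewritten `GraphModule` can still be scripted or exported afterwards.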

That said, the outermost C++ entry points are the convXd functions, which you can find via native_functions.yaml. I once blogged about some of the details of how Python maps to C++; it's old, but save for the location of the derivatives it's still mostly accurate.
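To see that surface from the Python side: as of recent versions, torch.nn.functional.conv2d and torch.conv2d both resolve to the same ATen conv2d operator declared in native_functions.yaml (under aten/src/ATen/native/ in the source tree), so a breakpoint in the C++ convXd entry point should catch both call paths:

```python
import torch

# Both the functional API and the top-level torch.conv2d resolve to the
# same ATen `conv2d` operator declared in native_functions.yaml.
x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
y_functional = torch.nn.functional.conv2d(x, w)
y_aten = torch.conv2d(x, w)
assert torch.equal(y_functional, y_aten)
```

Note that the operator you hit below this point depends on the dispatch key (CPU, CUDA, autograd, etc.), which may be why a breakpoint in one particular Conv implementation is never reached.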

Best regards

Thomas

Thank you for your response. I’ve looked at that entry point; however, for some reason I am not hitting the C++ methods I expect when running the model on my build. That is why I was wondering whether there is a detailed doc on the PyTorch architecture. Will take a look at your blog. Cheers!