torch.fx.wrap for a custom CUDA kernel

Hello,

I have my own quantization operator written in CUDA (following Custom C++ and CUDA Extensions — PyTorch Tutorials 2.1.1+cu121 documentation) and it works fine. But I need to use ASP (the automatic sparsity package), which uses torch.fx, and torch.fx throws an error at my quantization operator:

TypeError: fpquantizer_(): incompatible function arguments. The following argument types are supported:
1. (arg0: at::Tensor) -> None
Invoked with: Proxy(getattr_1)
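
For illustration, this is roughly how the failure arises during tracing (a minimal sketch; fpquantize is my pybind11 extension module):

import torch
import torch.fx

import fpquantize  # my pybind11 extension

class M(torch.nn.Module):
    def forward(self, x):
        # During symbolic tracing, x is a torch.fx.Proxy rather than a
        # real tensor, so the pybind11 signature check rejects it.
        fpquantize.fpquantizer_(x)
        return x

torch.fx.symbolic_trace(M())  # TypeError: incompatible function arguments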

As far as I understand from the torch.fx manual (hopefully correctly), I need to decorate my operator with torch.fx.wrap.
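
Just for reference, this is how torch.fx.wrap is normally used on a pure-Python function (a minimal sketch with a toy function my_op, not part of my code):

import torch
import torch.fx

def my_op(x):
    return x + 1

# Register my_op as a leaf: the tracer records a call to it
# instead of tracing into its body.
torch.fx.wrap("my_op")

class M(torch.nn.Module):
    def forward(self, x):
        return my_op(x)

gm = torch.fx.symbolic_trace(M())  # my_op shows up as a call_function node

So I tried to apply the same decoration from the C++ side of my extension: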

PYBIND11_MODULE(fpquantize, m)
{
    py::object fx_wrap = py::module::import("torch.fx").attr("wrap");
    // Bind the C++ function as a plain Python function...
    m.def("fpquantizer_", &fp_conversion_, "inplace quantization and storing in a float array");
    // ...and try to decorate the binding with torch.fx.wrap.
    m.attr("fpquantizer_") = fx_wrap(m.attr("fpquantizer_"));
}

which seems to do the job of decorating the operator, but torch.fx.wrap has a problem with the C++ function itself: it has no

__code__

attribute and also

isinstance(fpquantize.fpquantizer_, types.FunctionType)

is False, so torch.fx.wrap throws an error:

ImportError: AssertionError: fn_or_name must be a global function or string name

Is there a way to handle this? Is my approach correct?

Thanks a lot

The problem was that PYBIND11 alone doesn't make the C++ function a torch operator. One needs to use the procedure from Extending TorchScript with Custom C++ Operators — PyTorch Tutorials 2.1.1+cu121 documentation, i.e.:

TORCH_LIBRARY(fpquantize, m)
{
    // Register the operator under the torch.ops.fpquantize namespace;
    // the schema is inferred from the C++ signature.
    m.def("fpquantizer_", &fp_conversion_);
    // Register the CUDA kernel for this operator.
    m.impl("fpquantizer_", torch::kCUDA, &fp_conversion_);
}
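
With the operator registered through TORCH_LIBRARY, it becomes reachable via the torch.ops namespace and torch.fx traces it as a leaf call, with no torch.fx.wrap needed. A minimal sketch of the call site (the shared-library name is a placeholder for wherever your build puts the extension):

import torch
import torch.fx

# Loading the compiled extension runs the TORCH_LIBRARY registration.
torch.ops.load_library("fpquantize.so")

class M(torch.nn.Module):
    def forward(self, x):
        # Registered custom ops go through the dispatcher, so torch.fx
        # records this as a single call_function node instead of
        # tracing into it.
        torch.ops.fpquantize.fpquantizer_(x)
        return x

gm = torch.fx.symbolic_trace(M())
print(gm.graph)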