I get the following warning when I backprop through my own C++ operator:
UserWarning: pme::pme_reciprocal: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:62.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
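The warning seems to say that if the op is not differentiable, I can squash it by registering a fallthrough to the Autograd key. My ops are differentiable (I do need gradients through them), but for reference this is what I understand that escape hatch would look like, pieced together only from the warning text (untested):

TORCH_LIBRARY_IMPL(pme, Autograd, m) {
  // Tell autograd to fall through to the backend kernels; this silences
  // the warning, but no gradients are computed for these ops.
  m.impl("pme_direct", torch::CppFunction::makeFallthrough());
  m.impl("pme_reciprocal", torch::CppFunction::makeFallthrough());
}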
Here are my def and impl:
TORCH_LIBRARY(pme, m) {
  m.def("pme_direct(Tensor positions, Tensor charges, Tensor neighbors, Tensor deltas, Tensor distances, Tensor exclusions, Scalar alpha, Scalar coulomb) -> Tensor");
  m.def("pme_reciprocal(Tensor positions, Tensor charges, Tensor box_vectors, Scalar gridx, Scalar gridy, Scalar gridz, Scalar order, Scalar alpha, Scalar coulomb, Tensor xmoduli, Tensor ymoduli, Tensor zmoduli) -> Tensor");
}

TORCH_LIBRARY_IMPL(pme, CPU, m) {
  m.impl("pme_direct", pme_direct_cpu);
  m.impl("pme_reciprocal", pme_reciprocal_cpu);
}
Then I modified it as follows:
TORCH_LIBRARY_IMPL(pme, CPU, m) {
  m.impl("pme_direct", torch::dispatch(c10::DispatchKey::AutogradCPU, pme_direct_cpu));
  m.impl("pme_reciprocal", torch::dispatch(c10::DispatchKey::AutogradCPU, pme_reciprocal_cpu));
}
It crashed immediately.
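My guess is that the crash happens because I am handing an AutogradCPU-keyed function to a TORCH_LIBRARY_IMPL block that is already pinned to the CPU key, and that what I actually need is a separate Autograd registration that wraps the kernel in a torch::autograd::Function, along the lines of the C++ dispatcher tutorial. This is my untested sketch; it is simplified to a single tensor argument (the real registration would have to match the full schema above), PmeReciprocalFunction and pme_reciprocal_autograd are names I made up, and the backward is only a placeholder, not the real derivative:

#include <ATen/ATen.h>
#include <torch/autograd.h>
#include <torch/library.h>

using torch::Tensor;
using torch::autograd::AutogradContext;
using torch::autograd::tensor_list;

class PmeReciprocalFunction
    : public torch::autograd::Function<PmeReciprocalFunction> {
 public:
  static Tensor forward(AutogradContext* ctx, Tensor positions) {
    ctx->save_for_backward({positions});
    // Drop below the autograd keys so the redispatch reaches the CPU kernel.
    at::AutoDispatchBelowADInplaceOrView guard;
    static auto op = c10::Dispatcher::singleton()
        .findSchemaOrThrow("pme::pme_reciprocal", "")
        .typed<Tensor(const Tensor&)>();
    return op.call(positions);
  }

  static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
    auto saved = ctx->get_saved_variables();
    // Placeholder gradient: the real derivative of the reciprocal-space
    // energy w.r.t. positions would be computed here.
    return {grad_outputs[0] * torch::ones_like(saved[0])};
  }
};

Tensor pme_reciprocal_autograd(const Tensor& positions) {
  return PmeReciprocalFunction::apply(positions);
}

TORCH_LIBRARY_IMPL(pme, Autograd, m) {
  m.impl("pme_reciprocal", pme_reciprocal_autograd);
}

But I am not sure this is correct either.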
How do I fix this, and where can I find a reference?
Thanks!!