Out of Tree Backend Dispatcher - Registering Structured Ops

After following the open registration example to build an out-of-tree backend that replicates CUDA, but remotely via an API call instead, I've managed to register the AbsKernel using REGISTER_DISPATCH with the associated DispatchStub, and then used

at::Tensor &abs_out(const at::Tensor &self, at::Tensor &out) {
  return at::native::abs_out(self, out);
}

to register that operator. For the activation-function kernels, however, I cannot replicate the same approach; for instance, trying:

at::Tensor &elu_out(const at::Tensor &self, const at::Scalar &alpha,
                    const at::Scalar &scale, const at::Scalar &input_scale,
                    at::Tensor &out) {
  return at::native::elu_out(self, alpha, scale, input_scale, out);
}

fails because at::native::elu_out does not exist. Any help would be much appreciated!
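
For context on what I have tried so far: my understanding is that elu is a *structured* op, so unlike abs there is no plain at::native::elu_out function to delegate to; the out variant is generated from a meta/impl pair. The sketch below is my current workaround attempt, assuming a PrivateUse1 backend key and a hypothetical remote_elu_kernel that performs the API call (both names are placeholders for my setup, not anything from core PyTorch); it hand-rolls the iterator setup that the structured kernel would normally do:

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical kernel that forwards the elementwise ELU computation to the
// remote device via an API call (implementation elided).
void remote_elu_kernel(at::TensorIteratorBase &iter, const at::Scalar &alpha,
                       const at::Scalar &scale, const at::Scalar &input_scale);

// Since elu is structured, build the TensorIterator manually instead of
// delegating to a (nonexistent) at::native::elu_out.
at::Tensor &remote_elu_out(const at::Tensor &self, const at::Scalar &alpha,
                           const at::Scalar &scale,
                           const at::Scalar &input_scale, at::Tensor &out) {
  auto iter = at::TensorIterator::unary_op(out, self);
  remote_elu_kernel(iter, alpha, scale, input_scale);
  return out;
}

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  // "elu.out" is the overload name from native_functions.yaml.
  m.impl("elu.out", TORCH_FN(remote_elu_out));
}
```

Is this the intended pattern for structured ops out of tree, or is there a supported way to reuse the generated structured_elu_out machinery (analogous to what REGISTER_DISPATCH on elu_stub gives in-tree backends)?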