How to implement a custom activation function in the PyTorch C++ frontend?

I’m having difficulty finding documentation online that describes extending the C++ frontend, specifically implementing a custom activation function. Before digging through the source code, I wanted to ask here whether anyone has any information on that. Thanks!

This tutorial explains how to write custom C++ classes and call them in the Python frontend as well as in libtorch.
Is this what you are looking for?

I’m basically looking to understand how to implement an activation function in the C++ frontend and stay in C++, since I’m using PyTorch from another piece of C++ software. AFAIK TorchScript is used for Python integration, right?

I’m digging through the C++ API source code, and I’ve found this

class TORCH_API ELUImpl : public torch::nn::Cloneable<ELUImpl> {
 public:
  explicit ELUImpl(const ELUOptions& options_ = {});

  Tensor forward(Tensor input);

  void reset() override;

  /// Pretty prints the `ELU` module into the given `stream`.
  void pretty_print(std::ostream& stream) const override;

  /// The options with which this `Module` was constructed.
  ELUOptions options;
};

/// A `ModuleHolder` subclass for `ELUImpl`.
/// See the documentation for `ELUImpl` class to learn what methods it
/// provides, and examples of how to use `ELU` with `torch::nn::ELUOptions`.
/// See the documentation for `ModuleHolder` to learn about PyTorch's
/// module storage semantics.
TORCH_MODULE(ELU);

in

include/torch/csrc/api/include/torch/nn/modules/activation.h

So activation functions seem to be wrapped into function objects that implement forward, but I can’t seem to find the actual implementation. Also, what does TORCH_MODULE(ELU) do? Register the class somehow, somewhere?
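From a quick look at include/torch/csrc/api/include/torch/nn/pimpl.h, it doesn’t appear to register anything globally; it seems to expand to roughly a thin, shared_ptr-like handle around the Impl class:

// Rough expansion of TORCH_MODULE(ELU), as I read it in
// include/torch/csrc/api/include/torch/nn/pimpl.h -- a thin,
// shared_ptr-like wrapper around ELUImpl, no global registration:
class ELU : public torch::nn::ModuleHolder<ELUImpl> {
 public:
  using torch::nn::ModuleHolder<ELUImpl>::ModuleHolder;
};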

I’m also having difficulty finding an actual implementation of the elu function.

Specifically,

namespace detail {
inline Tensor elu(Tensor input, double alpha, bool inplace) {
  if (inplace) {
    return torch::elu_(input, alpha);
  } else {
    return torch::elu(input, alpha);
  }
}
} // namespace detail

forwards to torch::elu_, which leads me to declarations like

CAFFE2_API Tensor & elu_out(Tensor & out, const Tensor & self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1);

a CAFFE2_API native function?
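If that’s the case, then for a custom activation that can be written in terms of existing tensor ops, I suppose the native-function layer doesn’t need to be touched at all; a detail::-style free function composing existing ops should do. Something like this untested sketch (the name gaussian_activation and the formula exp(-x^2) are my own choices):

#include <torch/torch.h>

// Untested sketch: a functional-style Gaussian activation built from
// existing ATen ops (no new native function needed).
inline torch::Tensor gaussian_activation(const torch::Tensor& input) {
  return torch::exp(-input * input);
}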

So, back to my question: I want to implement a Gaussian as an activation function. What do I need to do?

  1. Implement the function in detail::?
  2. Implement the Impl class (the function object / module)?
  3. “Register” the module via TORCH_MODULE?

I mean, I could try copying what I see and digging through the code, but it would be nice if some information on the design were available somewhere.
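For the record, here is my untested attempt at copying the ELU pattern above for a Gaussian. The GaussianImpl/Gaussian names and the exp(-x^2) formula are my own, and I’ve skipped the options struct since there are no hyperparameters:

#include <torch/torch.h>
#include <ostream>

// Untested sketch following the ELU pattern. No options struct,
// since this Gaussian has no hyperparameters.
class GaussianImpl : public torch::nn::Cloneable<GaussianImpl> {
 public:
  GaussianImpl() = default;

  torch::Tensor forward(torch::Tensor input) {
    return torch::exp(-input * input);
  }

  // Cloneable requires reset(); nothing to (re)initialize here,
  // since there are no parameters or buffers.
  void reset() override {}

  /// Pretty prints the `Gaussian` module into the given `stream`.
  void pretty_print(std::ostream& stream) const override {
    stream << "Gaussian()";
  }
};

// Generates the `Gaussian` ModuleHolder wrapper, analogous to TORCH_MODULE(ELU).
TORCH_MODULE(Gaussian);

If that’s right, it should then be usable like any built-in module, e.g. inside a Sequential:

torch::nn::Sequential net(
    torch::nn::Linear(4, 8),
    Gaussian(),
    torch::nn::Linear(8, 1));
auto y = net->forward(torch::randn({2, 4}));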

I finally found some information in include/torch/csrc/api/include/torch/nn/module.h… reading this now.