Custom Nonlinear Activation Function that's not "1-to-1"

The nonlinear activation functions typically used in PyTorch that I am familiar with are one-to-one (element-wise) functions, like arctan, sigmoid, ReLU, etc.

Is it possible to have a custom nonlinear activation function that depends on multiple arguments?

So, for example, instead of feeding the result of a dot product into the activation function:
$\text{output} = f_{nl}(x \cdot A)$

the output is just some custom nonlinear function $f_{nl}(x, A)$, say for example:
$\text{output} = \sum_{i=1}^{N} \left( \sum_{j=1}^{N} \cos(A_{ij} x_j - j/N) \right) x_i$

Is it possible to use such a complex activation layer in a neural network in PyTorch?

Hi @stevensagona,

You can always define your custom function as an nn.Module object and use it inside your model, e.g. create an instance of the class and call it like you would call nn.Tanh()(x).
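For instance, here is a minimal sketch of that pattern (ScaledTanh is just an illustrative name, not an existing PyTorch module):

```python
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """Toy element-wise activation wrapped as an nn.Module."""
    def forward(self, x):
        return torch.tanh(2.0 * x)

x = torch.randn(4, 8)
y = ScaledTanh()(x)   # called exactly like nn.Tanh()(x)
```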

The example you gave is a one-input to one-output nonlinear function.

I am interested in custom functions that are not one-input to one-output, but are many-inputs to one-output.

Is it straightforward to define a more complicated function that takes multiple weights and inputs and combines them into one output, like in the example I gave?

And would it be straightforward to train such a thing?

As I stated in my previous reply, you can define the custom function as an nn.Module object, and it will work even with many inputs and one output. As long as you build it from differentiable torch operations, autograd handles the gradients, so it trains like any other layer. Use it like you would any other activation function, e.g. nn.Tanh().
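For example, here is a minimal sketch of the cosine function from your question as an nn.Module with a learnable weight matrix A (the class name, shapes, and random initialization are my assumptions, not a fixed recipe):

```python
import torch
import torch.nn as nn

class CosineActivation(nn.Module):
    """Many-inputs-to-one-output layer: sum_i (sum_j cos(A_ij * x_j - j/N)) * x_i."""
    def __init__(self, n):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n, n))           # learnable weights A_ij
        self.register_buffer("phase", torch.arange(n) / n) # the j/N term

    def forward(self, x):
        # x: (batch, n). Broadcasting gives element [b, i, j] = A_ij * x_j - j/N.
        inner = torch.cos(x.unsqueeze(1) * self.A - self.phase).sum(dim=2)  # sum over j
        return (inner * x).sum(dim=1)                      # sum over i -> (batch,)

layer = CosineActivation(n=8)
x = torch.randn(4, 8)
out = layer(x)          # shape (4,): one scalar per batch element
out.sum().backward()    # gradients flow into layer.A
```

One thing to keep in mind: because this layer reduces each input vector to a single scalar, the rest of your architecture has to expect that output shape.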