Custom activation function inside nn.Module

I want to create a custom activation function inside “class gen(nn.Module):”. I want to apply this activation function after the layers defined by “nn.Sequential”, so that the loss calculated from the output of the custom activation function will be backpropagated.

The activation function code is “z = torch.where(x > 0.1, x, zero_tensor)”.

How should I define the custom activation function inside “class gen(nn.Module):” so that the loss will be backpropagated? I need many layers of such activation functions.

Do I need to use a “forward” function instead of “my_act”, or something else?

Independently running code for the activation function is below:

import torch

class act():
    @staticmethod
    def my_act(x):
        # zero out values at or below the 0.1 threshold
        zero_tensor = torch.zeros_like(x)
        z = torch.where(x > 0.1, x, zero_tensor)
        return z
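To make the question concrete, here is a sketch of what I am trying to build. Wrapping the activation in its own nn.Module subclass (called MyAct here) so it can sit inside nn.Sequential is my guess at the approach, and the layer sizes are arbitrary placeholders:

```python
import torch
import torch.nn as nn

class MyAct(nn.Module):
    """Thresholded activation: pass values > 0.1, zero out the rest."""
    def forward(self, x):
        # torch.where is differentiable w.r.t. x, so gradients flow
        return torch.where(x > 0.1, x, torch.zeros_like(x))

class gen(nn.Module):
    def __init__(self):
        super().__init__()
        # placeholder layer sizes; MyAct used after each Linear layer
        self.net = nn.Sequential(
            nn.Linear(4, 8),
            MyAct(),
            nn.Linear(8, 2),
            MyAct(),
        )

    def forward(self, x):
        return self.net(x)

model = gen()
out = model(torch.randn(3, 4))
loss = out.sum()
loss.backward()  # backpropagates through the custom activations
```

This is only how I imagine it should look; I am not sure whether a separate nn.Module subclass is required, or whether a plain function applied in gen.forward would backpropagate just as well.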