How to define the activation function ReLU(x) * ReLU(1-x)?

I want to define the activation function ReLU(x) * ReLU(1-x), but I only know how to define one of this form:

import torch.nn as nn

class Act_op(nn.Module):
    def __init__(self):
        super(Act_op, self).__init__()

    def forward(self, x):
        # example custom activation: x ** 50
        return x ** 50

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.block = nn.Sequential()
        # hidden layers: Linear followed by the custom activation
        for i in range(len(R_variable['full_net']) - 2):
            self.block.add_module('linear' + str(i), nn.Linear(R_variable['full_net'][i], R_variable['full_net'][i + 1]))
            self.block.add_module('**50' + str(i), Act_op())
        # output layer: Linear with no activation
        i = len(R_variable['full_net']) - 2
        self.block.add_module('linear' + str(i), nn.Linear(R_variable['full_net'][i], R_variable['full_net'][i + 1]))

    def forward(self, x):
        out = self.block(x)
        return out

Thanks a lot!!

My code’s purpose is that when I input [1,100,100,1], it creates a DNN whose structure is (linear(1,100), relu(), linear(100,100), relu(), linear(100,1)). (The example replaces relu() with x**50.) The net is adaptive, so I use nn.Sequential.add_module(). Now I want to replace relu() with relu(x)*relu(1-x), but I don’t know how to do it.
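For concreteness, with a stand-in dict for my actual R_variable (just for illustration; the printed repr shown is from a recent PyTorch), the code above builds:

R_variable = {'full_net': [1, 100, 100, 1]}
net = Network()
print(net.block)
# Sequential(
#   (linear0): Linear(in_features=1, out_features=100, bias=True)
#   (**500): Act_op()
#   (linear1): Linear(in_features=100, out_features=100, bias=True)
#   (**501): Act_op()
#   (linear2): Linear(in_features=100, out_features=1, bias=True)
# )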

Hello Kejie!

The short answer is that you just do it.

torch.nn.functional.relu() (and its class version, torch.nn.ReLU) is differentiable (in the pytorch sense), so a product of two relu() calls is as well, and both relu() and the product work just fine with autograd and backward().

I don’t understand the point of the code you posted, nor its relevance to the question in the title of your post, but, quite simply:

import torch
print(torch.__version__)
one = torch.autograd.Variable(torch.FloatTensor([1.0]))
x = one / 2
torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)
x = -one
torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)
x = 10 * one
torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)

(The nonsense with autograd.Variable is because I’m using pytorch version 0.3.0.)

Here are the results:

>>> import torch
>>> print(torch.__version__)
0.3.0b0+591e73e
>>> one = torch.autograd.Variable(torch.FloatTensor([1.0]))
>>> x = one / 2
>>> torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)
Variable containing:
 0.2500
[torch.FloatTensor of size 1]

>>> x = -one
>>> torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)
Variable containing:
 0
[torch.FloatTensor of size 1]

>>> x = 10 * one
>>> torch.nn.functional.relu(x) * torch.nn.functional.relu(1 - x)
Variable containing:
 0
[torch.FloatTensor of size 1]
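
And as a minimal sketch of the gradient side, assuming a newer PyTorch (1.0 or later, where plain tensors take requires_grad and Variable is unnecessary):

import torch
import torch.nn.functional as F

x = torch.tensor([0.25], requires_grad=True)
y = F.relu(x) * F.relu(1 - x)   # nonzero only for 0 < x < 1
y.backward()
print(x.grad)   # tensor([0.5000]): for 0 < x < 1 the product is x * (1 - x), so d/dx = 1 - 2x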

Best.

K. Frank

Thanks a lot! I think my code is a bit confusing, so to restate: when I input [1,100,100,1], it creates a DNN whose structure is (linear(1,100), relu(), linear(100,100), relu(), linear(100,1)), with relu() replaced by x**50 in the example. The net is adaptive, which is why I use nn.Sequential.add_module(), and I want to replace relu() with relu(x)*relu(1-x). I’m sorry I didn’t clarify the problem.
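
For completeness, one way to do this, following the Act_op pattern from the question (the class name ReluProduct and the sizes argument are just illustrative choices, not from the thread):

import torch.nn as nn
import torch.nn.functional as F

class ReluProduct(nn.Module):
    def forward(self, x):
        # relu(x) * relu(1 - x): a bump that is nonzero only for 0 < x < 1
        return F.relu(x) * F.relu(1 - x)

class Network(nn.Module):
    def __init__(self, sizes):   # e.g. sizes = [1, 100, 100, 1]
        super(Network, self).__init__()
        self.block = nn.Sequential()
        # hidden layers: Linear followed by the product activation
        for i in range(len(sizes) - 2):
            self.block.add_module('linear' + str(i), nn.Linear(sizes[i], sizes[i + 1]))
            self.block.add_module('relu_prod' + str(i), ReluProduct())
        # output layer: Linear with no activation
        i = len(sizes) - 2
        self.block.add_module('linear' + str(i), nn.Linear(sizes[i], sizes[i + 1]))

    def forward(self, x):
        return self.block(x)

net = Network([1, 100, 100, 1])

Since ReluProduct has no learnable parameters, defining forward alone is enough; autograd differentiates through both relu() calls automatically, as K. Frank notes above.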