Linear activation function

Hi, I am training a custom CNN and I need to use a linear activation function, but I didn’t find anything in PyTorch. I know this activation just passes its input through to its output, so should I use nn.Linear(nin, nin), nn.Identity(), or do nothing?
While I am training my network, the training and validation loss are nearly constant, and I think this is caused by bad usage of my activation functions.
Here is my model and my code:

class Block(nn.Module):
    def __init__(self, in_channels, out_channels, exp=1, stride=1, type=''):
        self.t = type
        self.stride, self.inc, self.outc = stride, in_channels, out_channels
        self.exp = exp

        self.blockc = nn.Sequential(
            nn.Conv2d(self.inc, self.inc * self.exp, kernel_size=1),
            nn.Identity(),  # linear activation
            nn.Conv2d(self.inc * self.exp, self.inc * self.exp, kernel_size=3,
                      groups=self.inc * self.exp, stride=self.stride, padding=1),
            nn.Identity(),  # linear activation
            nn.Conv2d(self.inc * self.exp, self.outc, kernel_size=1))

    def forward(self, x):
        out = self.blockc(x)
        if self.t == 'A':
            out = torch.add(out, x)

        return out

class Model(nn.Module):
    def __init__(self):
        self.conv2d1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3,padding=1, stride=2)
        self.r = nn.ReLU6()
        self.stage1 = Block(8, 8, exp=1, stride=2, type='C')
        self.stage2 = nn.Sequential(
            Block(8, 16, exp=2, stride=2, type='C'),
            Block(16, 16, exp=2, type='A'))
        self.stage3 = nn.Sequential(
            Block(16, 24, exp=2, stride=2, type='C'),
            Block(24, 24, exp=2, type='A'))
        self.post_block2 = Block(24, 32, exp=2, type='B')

        self.fc = nn.Linear(128, 10)
    def forward(self, x):
        out = self.conv2d1(x)
        out = self.r(out)
        out = self.stage1(out)
        out = self.stage2(out)
        out = self.stage3(out)
        out = self.post_block2(out)

        out = out.view(-1, 128)

        out = self.fc(out)
        return out

Can you define that mathematically?
The concept itself sounds like you want y = m*x + n where m = 1 and n = 0, so basically the identity operator.
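In other words, a "linear" activation with m = 1 and n = 0 just returns its input. A tiny sketch of that formula (linear_activation is a hypothetical helper for illustration, not a PyTorch API):

```python
# y = m*x + n; with m=1, n=0 this is the identity function.
def linear_activation(x, m=1.0, n=0.0):
    return m * x + n

print(linear_activation(3.7))        # identity: output equals input
print(linear_activation(2.0, m=3.0, n=1.0))  # a general affine map
```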

Following on from what @JuanFMontesinos has already stated, @H_MP please share some code so we can better understand your problem. Usually, with a ‘linear’ activation function, you can just “do nothing” and return the input and that’s fine.

But do share some code (and wrap it in 3 backticks ``` to get the correct indentation) so your problem can be solved.

Here, exp means that the output of the first 1x1 conv in the Block class is out_channel = in_channel * exp.
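For concreteness, the channel arithmetic this implies for, e.g., the first block in stage2, Block(8, 16, exp=2):

```python
# Expansion factor: the first 1x1 conv maps in_channels -> in_channels * exp.
in_channels, out_channels, exp = 8, 16, 2

expanded = in_channels * exp  # width inside the block: 8 * 2 = 16
print(expanded)

# The last 1x1 conv then projects expanded -> out_channels (16 here).
```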

Make sure to properly define your parent classes, as that can cause problems: without the parent initialisation your class isn’t technically an nn.Module.

Your nn.Identity() question is about the self.blockc definition?

Thanks for your reply. Yes, I want to use a linear activation function, but I don’t know how to use it.

I think you can just remove the nn.Identity function as there’s no need for it.
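A minimal sketch (assuming PyTorch is installed) of why removing it is safe: nn.Identity() returns its input unchanged, so a Sequential with and without it computes exactly the same thing.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(8, 16, kernel_size=1)

with_identity = nn.Sequential(conv, nn.Identity())
without_identity = nn.Sequential(conv)

x = torch.randn(1, 8, 4, 4)
# Both pipelines share the same conv, and Identity is a no-op,
# so the outputs are bit-for-bit equal.
print(torch.equal(with_identity(x), without_identity(x)))
```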

Also make sure to properly define your parent class, i.e. change super().__init__() to super(Block, self).__init__() for Block and the same for Model.
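For illustration, a minimal sketch of what goes wrong when the parent class isn’t initialised (the Broken/Fixed names are just for this example): nn.Module’s bookkeeping for submodules is set up in its __init__, so assigning a layer before calling it raises an error.

```python
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        # super().__init__() missing on purpose
        self.fc = nn.Linear(4, 2)

class Fixed(nn.Module):
    def __init__(self):
        super().__init__()  # or equivalently: super(Fixed, self).__init__()
        self.fc = nn.Linear(4, 2)

try:
    Broken()
    raised = False
except AttributeError:
    # "cannot assign module before Module.__init__() call"
    raised = True

print(raised)   # the broken class fails
Fixed()         # the fixed class constructs fine
```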

Actually, removing the nn.Identity solved my problem. Thanks a lot!