Defining a custom activation function

Is there any way to restrict an nn.Parameter to a specific interval, so that the parameters do not go beyond it?

I checked this; there is no parameter in the activation function to train.

You’d transform them in forward(); the usual ways are weights = self.weights.exp(), self.weights.sigmoid() * scale + shift, or torch.lerp(min, max, self.weights.sigmoid()).
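For illustration, a minimal sketch of a module that keeps its effective weight inside a fixed interval by transforming an unconstrained raw parameter in forward() (the module and parameter names are made up for this example):

import torch
import torch.nn as nn

class BoundedScale(nn.Module):
    # hypothetical module: the effective weight always stays in (low, high)
    def __init__(self, low=0.0, high=1.0):
        super().__init__()
        self.low, self.high = low, high
        self.raw_weight = nn.Parameter(torch.zeros(1))  # unconstrained

    def forward(self, x):
        # sigmoid squashes to (0, 1), then rescale to (low, high)
        weight = self.raw_weight.sigmoid() * (self.high - self.low) + self.low
        return x * weight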

About Variable: just don’t use it, it’s obsolete and by now obscure…

What is the precise usage of Variable?

There is no usage of Variables anymore, as they were deprecated in PyTorch 0.4 :stuck_out_tongue:

In newer versions you can just use tensors and set requires_grad=True, if needed.
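For example, a tensor created with requires_grad=True is tracked by autograd directly, with no Variable wrapper:

import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # gradient of y with respect to x, i.e. 2 * x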

Hi Ptrblck,

I want to customize the Sigmoid function, for example so that inputs below 10% of the max (input) become zero and inputs above 80% of the max (input) become 1, and then call it in the Sequential. Would you please help me with that, for a model like this?

class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu

        self.l1 = nn.Sequential(nn.Conv2d(1, ndf, 4, 2, 1, bias=False),
                                nn.LeakyReLU(0.2, inplace=True))
        # input channels of l2 must match the ndf channels produced by l1
        self.l2 = nn.Sequential(nn.Conv2d(ndf, 1, 4, 2, 1, bias=False),
                                nn.Sigmoid())

    def forward(self, x):
        out = self.l1(x)
        out = self.l2(out)
        return out

You could calculate the current min and max values from the input activation using torch.min and torch.max. Once you have these values, you could use e.g. torch.threshold to assign the new values.
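A rough sketch of that suggestion, using the 10% / 80% cutoffs from the post above (the function name is arbitrary, and torch.where is used here instead of torch.threshold since it handles both tails in one expression):

import torch

def clipped_sigmoid(x):
    # below 10% of the max -> 0, above 80% of the max -> 1, sigmoid in between
    mx = x.max()
    out = torch.sigmoid(x)
    out = torch.where(x < 0.1 * mx, torch.zeros_like(out), out)
    out = torch.where(x > 0.8 * mx, torch.ones_like(out), out)
    return out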

I wrote this code. Indeed, I want to get the output 1 for values above 0.9 of the max, and for the others the output should be the sigmoid. Is this code correct? I think I need to put an if in the defined custom function.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu

        self.l1 = nn.Sequential(nn.Conv2d(1, ndf, 4, 2, 1, bias=False),
                                nn.LeakyReLU(0.2, inplace=True))
        self.l2 = nn.Sequential(nn.Conv2d(ndf, 1, 4, 2, 1, bias=False),
                                RectifiedSigmoig())

    def forward(self, x):
        out = self.l1(x)
        out = self.l2(out)
        return out


class RectifiedSigmoig(nn.Module):
    def __init__(self, input):
        super().__init__()
        self.input = input

    def forward(self, input):
        mm = torch.max(input)
        m = nn.Threshold(0.9 * mm, 1)
        return m(input)
        # unfinished: the else branch should return torch.sigmoid(input)

Please note carefully how nn.Threshold works: it replaces a value when it is less than the threshold, not when it is greater. So, in your case, all values less than 0.9 times the max will be set to 1.

@saba If you like, I have a really crude code here of what you are looking for:

import torch

a = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
# nn.Threshold only replaces values *below* the threshold, so negate twice:
# values of a above 0.91 * max(a) become a huge number, the rest are kept
m = torch.nn.Threshold(0.91 * (-a).min().item(), -10000000000.)
b = -m(-a)
# print(b)
print(torch.sigmoid(b))  # sigmoid saturates to 1 for the huge values
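An equivalent and arguably more direct formulation uses torch.where; a minimal sketch:

import torch

a = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
# 1 where a exceeds 91% of its max, plain sigmoid elsewhere
print(torch.where(a > 0.91 * a.max(), torch.ones_like(a), torch.sigmoid(a)))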

Hello @harsha_g, thanks for your help. Indeed, I want the output to be 1 for inputs lower than 90% of the max and to apply the sigmoid to the rest. I wrote this class and called it, but it gives me the error:
nn.ConvTranspose2d(ngf, 1, 3, 2, 3, bias=False), RectifiedSigmoig()
TypeError: __init__() missing 1 required positional argument: 'input'

class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d( nz, ngf * 8, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d( ngf * 4, ngf * 2, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d( ngf * 2, ngf, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, 1, 3, 2, 3, bias=False),
            RectifiedSigmoig(),
            # nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)

## ---------- defined sigmoid ----------

class RectifiedSigmoig(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input):
        mm = torch.max(input)
        m = 0.91 * mm
        # output has the same batch and spatial size as the input
        out1 = torch.zeros(input.size(0), 1, input.size(2), input.size(3))
        for ii in range(input.size(0)):
            gg = input[ii, :, :, :].squeeze(0)  # (H, W)
            for ii1 in range(gg.shape[0]):
                for ii2 in range(gg.shape[1]):
                    if gg[ii1, ii2] > m:
                        bb = 1.
                    else:
                        bb = torch.sigmoid(gg[ii1, ii2])
                    out1[ii, 0, ii1, ii2] = bb
        return out1

Probably it’s using an old definition from the cache. I don’t see any issues with the class instantiation when I run it on my machine. Apart from that, you might want to double-check the logic of your RectifiedSigmoig function. Did you run any tests after writing it?
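As a side note, the nested loops in RectifiedSigmoig can be replaced by a single vectorized expression; a minimal sketch that keeps the > 0.91 * max logic from the posted class (the class name here is made up):

import torch
import torch.nn as nn

class RectifiedSigmoigVectorized(nn.Module):
    # loop-free variant: 1 where the activation exceeds 91% of the
    # global max, plain sigmoid everywhere else
    def forward(self, input):
        m = 0.91 * input.max()
        return torch.where(input > m, torch.ones_like(input), torch.sigmoid(input))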

Hi Ptrblck,

I need to take the elementwise minimum of the output and the value 1 and then rescale it. I need to keep the size, which is 64x1x11x11x11, but it is very time consuming to do this with for loops. How can I do that without for loops? Is result = ((min(Eta*(Out1), 1))) * 0.995 + 0.005 correct?

def SigmoidClip(Eta, SHit2, Max, Min, Input):
    Out1 = 1 / (1 + torch.exp(-Input + SHit2))
    result = min(Eta * Out1, 1) * 0.995 + 0.005
    return result
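As written, Python’s built-in min will raise an error on a tensor with more than one element; the elementwise, loop-free version of this clipping is torch.clamp. A minimal sketch (the function name is made up; parameter names follow the post):

import torch

def sigmoid_clip(Eta, SHit2, Input):
    # shifted sigmoid, equivalent to 1 / (1 + exp(-Input + SHit2))
    Out1 = torch.sigmoid(Input - SHit2)
    # elementwise min(Eta * Out1, 1) without loops, then rescale into [0.005, 1.0];
    # the input shape (e.g. 64x1x11x11x11) is preserved
    return torch.clamp(Eta * Out1, max=1.0) * 0.995 + 0.005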