I am trying to constrain the final layer of my NN to have non-negative weights, for my binary classification task (the reason I want non-negative weights doesn't matter right now).
This is basically what my code looks like:
```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, in_dim, hidden_dim1, hidden_dim2, hidden_dim3, n_classes):
        super(Classifier, self).__init__()
        # other layers
        self.classify = nn.Linear(hidden_dim3, n_classes)

    def forward(self, g, h):
        # other layers
        hg = self.classify(h)
        # clamp the final layer's weights to be non-negative
        self.classify.weight.data = self.classify.weight.data.clamp(min=0)
        hg = torch.sigmoid(hg)
        return hg
```
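(For reference, the other variant I've seen suggested is clamping after each optimizer step instead of inside `forward`. A minimal sketch, assuming a standard training loop where `model`, `loader`, `loss_fn`, `optimizer`, and the graph input `g` are already set up, which is what my question below applies to as well:)

```python
for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(g, x), y)
    loss.backward()
    optimizer.step()
    # re-project the final layer's weights onto the non-negative orthant
    with torch.no_grad():
        model.classify.weight.clamp_(min=0)
```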
So am I doing this right? Is this the proper way of forcing the final layer to have only non-negative weights, so that it only looks for "positive" features to do the classification?
Also, wouldn't there be a problem because sigmoid of a non-negative input only outputs probabilities above 50%? The bias should fix this problem, right?
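To illustrate what I mean, here is a quick check; my assumption is that the hidden activations `h` are themselves non-negative (e.g. coming out of a ReLU), and the zeroed bias is just for the illustration:

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 1)
with torch.no_grad():
    layer.weight.clamp_(min=0)  # non-negative weights
    layer.bias.zero_()          # no bias, for the illustration

h = torch.rand(8, 4)            # non-negative inputs, as after a ReLU
probs = torch.sigmoid(layer(h))
print(probs.min())              # always >= 0.5, since the logits are >= 0
```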
Note that Keras has the `NonNeg` weight constraint (`tf.keras.constraints.NonNeg`, passed as `kernel_constraint`), which does the same thing, and I am trying to do that in PyTorch.
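(For concreteness, this is the Keras usage I mean; a sketch assuming `tensorflow.keras`:)

```python
from tensorflow import keras

# Dense layer whose kernel is constrained to be non-negative after each update
out_layer = keras.layers.Dense(
    1,
    activation="sigmoid",
    kernel_constraint=keras.constraints.NonNeg(),
)
```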