BCEWithLogitsLoss binary classification

Hey guys,
I built a CNN for a binary classification task, so I'm using BCEWithLogitsLoss as the loss function.
My dataset is unbalanced (24 positive examples, 399 negatives), so I want to use the pos_weight parameter to counter this problem.
But I'm not sure I understood how to use the parameter correctly; here's the code for the CNN and the pos_weight initialization.

    import torch
    import torch.nn as nn
    import torch.optim as optim

    class CNNModel(nn.Module):
        def __init__(self):
            super(CNNModel, self).__init__()

            self.conv_layer1 = self._conv_layer_set(1, 32)
            self.norm1 = nn.BatchNorm3d(32)
            self.conv_layer2 = self._conv_layer_set(32, 64)
            self.norm2 = nn.BatchNorm3d(64)
            self.conv_layer3 = self._conv_layer_set2(64, 64)
            self.norm3 = nn.BatchNorm3d(64)  # separate norm for the third block (was reusing norm2)
            self.fc1 = nn.Linear(64 * 1 * 21 * 23, 500)
            self.fc2 = nn.Linear(500, 100)
            self.bn1 = nn.LayerNorm(100)
            self.fc3 = nn.Linear(100, 1)  # single logit for binary classification
            self.Re = nn.ReLU()

            # self.softmax = nn.Softmax(dim=1)
            # self.sig = nn.Sigmoid()  # not needed: BCEWithLogitsLoss applies the sigmoid internally

        def _conv_layer_set(self, in_c, out_c):
            # Conv -> LeakyReLU -> 3x3x3 max pool
            conv_layer = nn.Sequential(
                nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
                nn.LeakyReLU(),
                nn.MaxPool3d(3, 3),
            )
            return conv_layer

        def _conv_layer_set2(self, in_c, out_c):
            # Conv -> LeakyReLU -> 2x2x2 max pool
            conv_layer = nn.Sequential(
                nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
                nn.LeakyReLU(),
                nn.MaxPool3d(2, 2),
            )
            return conv_layer

        def forward(self, x):
            out = self.conv_layer1(x)
            out = self.norm1(out)
            out = self.conv_layer2(out)
            out = self.norm2(out)
            out = self.conv_layer3(out)
            out = self.norm3(out)
            out = out.view(out.size(0), -1)  # flatten to (batch, features)
            out = self.fc1(out)
            out = self.Re(out)
            out = self.fc2(out)
            out = self.Re(out)
            out = self.bn1(out)
            out = self.fc3(out)  # raw logits; no sigmoid here

            return out
    pos_weight = torch.tensor([16.0]).to(device)  # ~ num_negatives / num_positives = 399 / 24
    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
    optimizer = optim.Adam(net.parameters(), lr=0.1)
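
For context, here's a minimal sketch of how I understand the recipe from the docs (pos_weight = num_negatives / num_positives, about 16.6 for my counts); device is assumed to be defined as elsewhere in my script:

    import torch
    import torch.nn as nn

    # Counts from my dataset
    num_pos, num_neg = 24, 399

    # The docs suggest pos_weight = num_negatives / num_positives
    pos_weight = torch.tensor([num_neg / num_pos], device=device)  # ~16.6

    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    # The model outputs one logit per sample: shape (batch, 1); targets are floats
    logits = torch.randn(8, 1, device=device)
    targets = torch.randint(0, 2, (8, 1), device=device).float()
    loss = criterion(logits, targets)

Does that match how pos_weight is meant to be used?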

You could try looking at this post. You need to have the same number of weights as classes.
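
As a sketch of what that means in the multi-label case (the three classes and their weights here are made up):

    import torch
    import torch.nn as nn

    # Hypothetical 3-class multi-label setup: one logit and one pos_weight per class
    pos_weight = torch.tensor([1.0, 2.0, 16.0])   # length == number of classes
    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    logits = torch.randn(8, 3)                     # (batch, num_classes)
    targets = torch.randint(0, 2, (8, 3)).float()
    loss = criterion(logits, targets)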

Thank you for your answer.
In my case I just have to predict 0 or 1, so I assume I only need one weight, for the positive class. Right?

Yes, you're right: you should only need one weight.


Hi again. I was wondering: does the weight parameter in BCEWithLogitsLoss have a different meaning than in the classic BCELoss? The documentation gives a different definition for each of them.
Thank you in advance.

Hi Alessio!

I'm pretty sure that BCELoss and BCEWithLogitsLoss use their weight arguments in the same way. On the other hand, BCEWithLogitsLoss has a pos_weight argument, while BCELoss does not. I don't know of any good reason for this – I've always assumed that it's just an oversight.
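
If it helps, here's a quick sanity-check sketch (my own, not from the docs): with the same per-element weight, BCEWithLogitsLoss on raw logits should match BCELoss applied to the sigmoid of those logits.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    logits = torch.randn(5, 1)
    targets = torch.randint(0, 2, (5, 1)).float()
    weight = torch.rand(5, 1)          # per-element rescaling weight

    loss_a = nn.BCEWithLogitsLoss(weight=weight)(logits, targets)
    loss_b = nn.BCELoss(weight=weight)(torch.sigmoid(logits), targets)
    print(torch.allclose(loss_a, loss_b))  # expected: True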

Best.

K. Frank


Thanks for the answer. I'll try the weight parameter and check how it goes.