Binary cross entropy loss and validation

Hi all,

I have a quick question regarding binary cross-entropy classification. I am very new to this field.

At the moment I am working on time-series data fed into an LSTM. However, I am not sure whether my validation and loss computations are correct, or what values I should expect from them.

Below is my code for loss and accuracy.

My true positive output occurs roughly once every 100 to 120 frames, so I penalise the positive class with a weight of 100.

Should I expect my loss to drop to something like 0.00x after enough training? And does my accuracy computation make sense to you?

            self.optimizer = optim.Adam(self.model.parameters(), lr=lr)
            self.pos_weight_factor = torch.tensor(100)
            self.criterion = nn.BCELoss(reduction='none')

            predictions = self.model(data.float())

            loss_get = self.criterion(predictions.float(), target.float())
            loss_flat = loss_get.flatten()
            target_flat = target.flatten()
            loss_flat[target_flat == 1] *= self.pos_weight_factor
            loss = loss_flat.mean()


            # clone() is needed: detach() shares storage with `predictions`,
            # so modifying `pred` in place would also mutate the tensor the
            # loss still needs for backward.
            pred = predictions.detach().clone()
            pred[pred >= 0.5] = 1
            pred[pred < 0.5] = 0

            # Note: this is the fraction of true positives recovered, i.e.
            # recall (sensitivity) on the positive class, not overall accuracy.
            pos_preds = pred[target == 1]
            accuracy = (pos_preds == 1).sum().item() / pos_preds.shape[0]
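As a side note, PyTorch's `nn.BCEWithLogitsLoss` accepts a `pos_weight` argument that applies exactly this positive-class weighting internally, and it is numerically more stable than sigmoid followed by `BCELoss`. A minimal sketch with made-up logits and targets (not your actual data), comparing it against the manual weighting above:

```python
import torch
import torch.nn as nn

# Hypothetical raw model outputs (logits, no sigmoid) and targets.
logits = torch.tensor([2.0, -1.0, 0.5, -3.0])
target = torch.tensor([1.0, 0.0, 1.0, 0.0])

# pos_weight scales the loss terms of positive targets, which is equivalent
# to the manual `loss_flat[target_flat == 1] *= 100` step.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(100.0))
loss = criterion(logits, target)

# Manual version for comparison: unweighted per-element BCE on probabilities,
# then scale the positive entries and average.
manual = nn.BCELoss(reduction='none')(torch.sigmoid(logits), target)
manual[target == 1] *= 100.0

print(torch.allclose(loss, manual.mean()))  # → True
```

If you switch to this, remember to remove the final sigmoid from the model's forward pass and apply `torch.sigmoid` only when thresholding for metrics.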


For an imbalanced use case like this, accuracy can be misleading due to the accuracy paradox.
It would be better to look at the confusion matrix and related statistics such as sensitivity, specificity, etc.
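To make that concrete, here is a minimal sketch computing the confusion-matrix counts and sensitivity/specificity from thresholded predictions; the prediction and target values are made up for illustration:

```python
import torch

# Hypothetical thresholded predictions and ground-truth labels.
pred   = torch.tensor([1, 0, 1, 0, 0, 1, 0, 0])
target = torch.tensor([1, 0, 0, 0, 1, 1, 0, 0])

# Confusion-matrix counts.
tp = ((pred == 1) & (target == 1)).sum().item()  # true positives
tn = ((pred == 0) & (target == 0)).sum().item()  # true negatives
fp = ((pred == 1) & (target == 0)).sum().item()  # false positives
fn = ((pred == 0) & (target == 1)).sum().item()  # false negatives

sensitivity = tp / (tp + fn)  # recall on the rare positive class
specificity = tn / (tn + fp)  # how well negatives are rejected

print(tp, tn, fp, fn)                 # → 2 4 1 1
print(sensitivity, specificity)
```

With a 1-in-100 positive rate, a model that always predicts 0 would score ~99% plain accuracy but 0 sensitivity, which is why these per-class metrics matter here.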