Accuracy decreases on validation set with smaller batch sizes

I am training the following model to classify whether something is in one state or another. What I'm struggling with is that with smaller batch sizes, my model's validation accuracy sits at 50%, so no better than guessing. If I increase the batch size to even just 4, I start getting 85%+ accuracy. I'm using BCELoss.

import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        
        self.conv_layers = nn.Sequential(
            nn.Conv1d(3, 6, 5),
            nn.ReLU(),
            nn.Conv1d(6, 16, 5),
            nn.ReLU(),
            nn.Conv1d(16, 32, 5),
            nn.ReLU(),
        )
        self.linear_layers = nn.Sequential(
            nn.Linear(32*3, 16),
            nn.ReLU(),
            nn.Linear(16, 2),
            nn.Softmax(dim=0)
        )

    def forward(self, x):
        x = self.conv_layers(x)
        x = x.view(-1, 32*3)
        return self.linear_layers(x)

Accuracy is being calculated as follows:

# count predictions whose class index matches the label's class index
running_acc += torch.eq(torch.argmax(lb, dim=1), torch.argmax(out, dim=1)).sum().item()
...
acc = running_acc / total_ds_length

The argmax over the labels is there because they are one-hot encoded.

My model is in eval() mode.
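
For reference, here is a condensed, self-contained version of my validation loop. The dummy tensors, the batch size, and the sequence length of 15 (which the conv stack reduces to 3, matching the 32*3 flatten) are only stand-ins for my real data:

import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in data: 32 samples, 3 channels, length 15
inputs = torch.randn(32, 3, 15)
labels = torch.eye(2)[torch.randint(0, 2, (32,))]  # one-hot labels, shape (32, 2)
val_loader = DataLoader(TensorDataset(inputs, labels), batch_size=4)
total_ds_length = len(val_loader.dataset)

model = Classifier()
model.eval()                   # eval mode for validation
running_acc = 0
with torch.no_grad():          # no gradients needed while evaluating
    for x, lb in val_loader:
        out = model(x)
        running_acc += torch.eq(torch.argmax(lb, dim=1),
                                torch.argmax(out, dim=1)).sum().item()

acc = running_acc / total_ds_length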

It turns out BCELoss was the wrong choice for this problem.
CrossEntropyLoss better suits my application, and switching to it also gave a 4% accuracy increase on both the validation and test sets, consistently across batch sizes.
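
Concretely, the change was to drop the final nn.Softmax layer, since CrossEntropyLoss applies log-softmax internally and expects raw logits, and to pass class indices instead of one-hot vectors as targets. (Worth noting: my Softmax was also applied over dim=0, the batch dimension, rather than dim=1, the class dimension, which would explain why small batches collapsed to constant predictions.) A sketch of the updated model and loss, again with dummy data standing in for mine:

import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv1d(3, 6, 5),
            nn.ReLU(),
            nn.Conv1d(6, 16, 5),
            nn.ReLU(),
            nn.Conv1d(16, 32, 5),
            nn.ReLU(),
        )
        # final nn.Softmax removed: the network now returns raw logits
        self.linear_layers = nn.Sequential(
            nn.Linear(32*3, 16),
            nn.ReLU(),
            nn.Linear(16, 2),
        )

    def forward(self, x):
        x = self.conv_layers(x)
        x = x.view(-1, 32*3)
        return self.linear_layers(x)

model = Classifier()
criterion = nn.CrossEntropyLoss()                # log-softmax + NLL in one op

x = torch.randn(4, 3, 15)                        # one dummy batch
lb = torch.eye(2)[torch.randint(0, 2, (4,))]     # one-hot labels, shape (4, 2)

out = model(x)                                   # raw logits, shape (4, 2)
loss = criterion(out, torch.argmax(lb, dim=1))   # targets as class indices
loss.backward()

At evaluation time, argmax over the logits gives the same predictions as argmax over softmax probabilities, so the accuracy calculation above is unchanged.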

A lesson in choosing the right loss function for the task.