Calculating accuracy gives wonky result

Hi,

I have a neural network in PyTorch. When I run it, the training accuracy looks wonky (a constant 1.00).

epoch=20, loss=0.00002, val_loss=33.33334, acc=1.00000, val_acc=0.66667
.
.
epoch=70, loss=0.00000, val_loss=2.69771, acc=1.00000, val_acc=0.66667

Also, val_acc doesn’t seem to change. Here’s the code I use to calculate accuracy:

for epoch in range(epochs):
    loss = n_correct = 0
    model3.train()
    for batch, target in train_loader:
        batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
        optimizer3.zero_grad()
        outputs = model3(batch)
        train_loss = objective3(outputs, target)
        loss += train_loss.item()
        n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
        train_loss.backward()
        optimizer3.step()
    
    loss = loss / len(train_loader)
    acc = (n_correct.float() / len(train)).cpu().numpy()
    epoch += 1  # report 1-based epoch numbers
        
    model3.eval()
    val_loss = val_n_correct = 0
    with torch.no_grad():
        for batch, target in val_loader:
            batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
            outputs = model3(batch)
            val_loss += objective3(outputs, target)
            val_n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
        val_loss = (val_loss / len(val_loader)).cpu().numpy()
        val_acc = (val_n_correct.float() / len(val)).cpu().numpy()

    
    if (epoch % print_stats_interval) == 0 or epoch == epochs:
        print(f'epoch={epoch:.0f}, loss={loss:.5f}, val_loss={val_loss:.5f}, acc={acc:.5f}, val_acc={val_acc:.5f}')
    log3.append((epoch, loss, val_loss, acc, val_acc))
log3 = pd.DataFrame(log3, columns=['epoch', 'loss', 'val_loss', 'acc', 'val_acc'])

I’ve used a similar accuracy calculation before, but now something seems to be going wrong. I’d be grateful if someone could give suggestions!

100% accuracy may just mean your network has learnt the training set perfectly (or maybe memorized / overfit it?). I’d be more concerned about the apparently fixed accuracy on the validation set, probably…
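One other thing worth double-checking, though I can’t see your tensor shapes so this is just a guess: if target comes out of the DataLoader with shape (N, 1) while outputs.reshape(-1) has shape (N,), the == in your n_correct line broadcasts to (N, N) and silently counts the wrong thing. A tiny sketch of the effect:

```python
import torch

# Hypothetical shapes, just to illustrate the pitfall -- adjust to
# whatever your DataLoader actually yields.
target = torch.tensor([[1.], [0.], [1.]])  # shape (3, 1), e.g. labels kept as a column
preds = torch.tensor([1., 0., 0.])         # shape (3,), e.g. outputs.reshape(-1) > 0.5

# == broadcasts (3, 1) against (3,) into a (3, 3) matrix: 9 comparisons, not 3.
print((target == preds).sum().item())           # 4 -- not a valid "n correct"

# Flattening both sides first gives the intended element-wise count.
print((target.view(-1) == preds).sum().item())  # 2 out of 3 correct
```

If the shapes already match, this isn’t your problem, but it’s cheap to rule out with a quick print(target.shape, outputs.shape) inside the loop.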

Thank you for the reply! Yes, the validation accuracy concerns me too; it’s calculated the same way as the training accuracy. I’m planning to run this with a larger dataset to see whether that fixes the issue (in case it’s due to overfitting) — I’ve been training the network on a smaller dataset to keep run times short while testing. If you have any suggestions, I’d be happy to hear them!