Hi,
I have a neural network written in PyTorch. When I run it, the training accuracy looks a bit wonky (it's stuck at a constant 1.00):
epoch=20, loss=0.00002, val_loss=33.33334, acc=1.00000, val_acc=0.66667
.
.
epoch=70, loss=0.00000, val_loss=2.69771, acc=1.00000, val_acc=0.66667
Also, val_acc doesn't seem to change at all. Here's the code I use for training and for calculating accuracy:
for epoch in range(epochs):
    loss = n_correct = 0
    model3.train()
    for batch, target in train_loader:
        batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
        batch = batch.view(-1, n_metabolites).to(device)
        optimizer3.zero_grad()
        outputs = model3(batch)
        train_loss = objective3(outputs, target)
        loss += train_loss.item()
        n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
        train_loss.backward()
        optimizer3.step()
    loss = loss / len(train_loader)
    acc = (n_correct.float() / len(train)).cpu().numpy()
    epoch += 1

    model3.eval()
    val_loss = val_n_correct = 0
    with torch.no_grad():
        for batch, target in val_loader:
            batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
            batch = batch.view(-1, n_metabolites).to(device)
            outputs = model3(batch)
            val_loss += objective3(outputs, target)
            val_n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
    val_loss = (val_loss / len(val_loader)).cpu().numpy()
    val_acc = (val_n_correct.float() / len(val)).cpu().numpy()

    if (epoch % print_stats_interval) == 0 or epoch == epochs:
        print(f'epoch={epoch:.0f}, loss={loss:.5f}, val_loss={val_loss:.5f}, acc={acc:.5f}, val_acc={val_acc:.5f}')
    log3.append((epoch, loss, val_loss, acc, val_acc))

log3 = pd.DataFrame(log3, columns=['epoch', 'loss', 'val_loss', 'acc', 'val_acc'])
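In case it helps, here's a minimal, self-contained toy version of just the thresholding/comparison step I'm doing above (made-up numbers, and assuming one output value per sample, which is what I think my model produces, but whether my real shapes actually line up like this is part of what I'm unsure about):

import torch

# Toy illustration of the accuracy comparison, not my real data
outputs = torch.tensor([[0.9], [0.2], [0.7]])   # model outputs, shape [3, 1]
target = torch.tensor([1.0, 0.0, 0.0])          # labels, shape [3]

preds = (outputs.reshape(-1) > 0.5).float()     # threshold -> tensor([1., 0., 1.])
n_correct = (target == preds).sum()             # element-wise compare -> tensor(2)
print(n_correct.float() / len(target))          # accuracy for this toy batch -> 0.6667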
I've used a similar accuracy calculation before, but now something seems to be going wrong. I'd be happy if someone could offer suggestions!