Calculating the model's overall accuracy

Sorry if this is a very simple question. I would like to calculate the overall accuracy of a model during training. I'm currently writing the test loss and test count error to TensorBoard. What do I need to do to get the overall accuracy across the whole training run, please? Many thanks!

    # inside the evaluation loop over loader_test
    # (preds and targets come from the model's forward pass on the current batch)
    loss = torch.mean((preds - targets) ** 2)           # MSE test loss
    count_error = torch.abs(preds - targets).mean()     # mean absolute count error
    mean_test_error += count_error.item()               # accumulate a plain float, not a tensor

    writer.add_scalar('test_loss', loss.item(), global_step=global_step)
    writer.add_scalar('test_count_error', count_error.item(), global_step=global_step)

    global_step += 1

# after the loop: average over the number of test batches
mean_test_error = mean_test_error / len(loader_test)
print("Test count error: %f" % mean_test_error)

# keep the checkpoint with the lowest test error so far
if mean_test_error < best_test_error:
    best_test_error = mean_test_error
    torch.save({'state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'globalStep': global_step,
                'train_paths': dataset_train.files,
                'test_paths': dataset_test.files}, checkpoint_path)

You could store the number of correctly predicted samples during training and divide it by the total number of samples after each epoch is done.
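As a minimal sketch (assuming a classification setup where the model outputs class logits and the targets are integer class labels; model, loader_test, writer, and global_step are reused from your snippet above):

correct = 0
total = 0

model.eval()
with torch.no_grad():
    for data, targets in loader_test:
        outputs = model(data)
        predicted = outputs.argmax(dim=1)               # predicted class per sample
        correct += (predicted == targets).sum().item()  # count correct predictions
        total += targets.size(0)                        # count all samples

accuracy = correct / total                              # overall accuracy for this epoch
writer.add_scalar('test_accuracy', accuracy, global_step=global_step)

If your task is really a counting/regression problem (as the count error suggests), you would instead keep accumulating an error metric the same way and report its running average.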
Also, the ImageNet example uses some utility classes to track the losses and accuracy, which might be useful for your use case.
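For reference, the pattern looks roughly like this (a simplified sketch in the spirit of the AverageMeter class from pytorch/examples/imagenet/main.py, not the exact code):

class AverageMeter:
    """Keeps a running sum and count so the running average can be queried at any time."""
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, val, n=1):
        # val is the metric for one batch, n the number of samples in that batch
        self.sum += val * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)

# usage: one meter per metric, updated per batch, read once per epoch
acc_meter = AverageMeter()
# inside the test loop:
#     acc_meter.update(batch_accuracy, n=targets.size(0))
# after the loop:
#     writer.add_scalar('test_accuracy', acc_meter.avg, global_step=global_step)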