Am I measuring accuracy wrong?

So I have the following model:

model = Net(n_x, n_h, n_y)
optim = torch.optim.ASGD(model.parameters(), lr=0.005)
loss_function = nn.BCELoss()
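
For context, Net is a simple two-layer network (relu on the hidden layer, sigmoid on the output), roughly along these lines:

import torch
import torch.nn as nn

class Net(nn.Module):
    # input -> hidden layer (relu) -> output probability (sigmoid)
    def __init__(self, n_x, n_h, n_y):
        super().__init__()
        self.fc1 = nn.Linear(n_x, n_h)
        self.fc2 = nn.Linear(n_h, n_y)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return torch.sigmoid(self.fc2(h))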

and this is the way I am training it. I am basically trying to count how many times my algorithm correctly predicts the output:

train_losses = []
accuracy = []


for epoch in range(epochs):
  model.train()
  train_loss = []
  batch_accuracy = []
  for idx in range(0, train_x.shape[0], batch_size):
    batch_x = torch.from_numpy(train_x[idx : idx + batch_size]).float()
    batch_y = torch.from_numpy(train_y[:, idx : idx + batch_size]).float()

    model_output = model(batch_x)
    loss = loss_function(model_output, batch_y)
    train_loss.append(loss.item())
    labels_normalized=list()
    count=0
    #Here I am checking the output of my label against the ground truth
    for i in range(0,len(model_output)):
      if(model_output[:,i]>0.5 and batch_y[:,i]>0):
        count+=1
      elif((model_output[:,i]<0.5) and (batch_y[:,i]==0)):
        count+=1
      else:
        continue

    optim.zero_grad()
    loss.backward()
   
    optim.step()
      
  if epoch % 100 == 1:
    print("Iteration : {}, Training loss: {} ".format(epoch, np.mean(train_loss)))
    train_losses.append(train_loss)
    # Trying to print the count here
    print(count)

plt.plot(np.squeeze(train_losses))

plt.ylabel('loss')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()



As you can see above, I am trying to “count” the number of times the predicted label matches the ground-truth label. However, the print statement keeps printing 1; in other words, my training accuracy is not improving. Meanwhile, my loss is decreasing quite significantly, so I don’t think the training itself is the issue. For now I am only trying to measure my training accuracy.

I guess the indexing might be wrong in this loop:

    for i in range(0,len(model_output)):
      if(model_output[:,i]>0.5 and batch_y[:,i]>0):
        count+=1
      elif((model_output[:,i]<0.5) and (batch_y[:,i]==0)):
        count+=1
      else:
        continue

While i will take values in range(model_output.size(0)), i.e. indices along dim0, you are indexing at dim1, which seems wrong.
Usually you don’t need the loop, but can directly compute the accuracy via:

preds = model_output > 0.5
nb_correct = (preds == batch_y).sum()

assuming a sigmoid was applied on model_output.

PS: It’s generally better to use nn.BCEWithLogitsLoss and pass the raw logits into this criterion for numerical stability.
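
For example, a minimal sketch (the sizes and tensors below are made up just so the snippet runs on its own; the nn.Linear stands in for Net returning raw logits, i.e. with the sigmoid removed from forward):

import torch
import torch.nn as nn

# hypothetical stand-ins, only so the snippet is self-contained
model = nn.Linear(20, 1)                       # imagine Net without the final sigmoid
batch_x = torch.randn(64, 20)
batch_y = torch.randint(0, 2, (64, 1)).float()

loss_function = nn.BCEWithLogitsLoss()         # applies the sigmoid internally, numerically stable

logits = model(batch_x)                        # raw scores, no sigmoid in forward()
loss = loss_function(logits, batch_y)

# apply the sigmoid only when thresholding for accuracy
preds = torch.sigmoid(logits) > 0.5
nb_correct = (preds == batch_y.bool()).sum().item()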

Awesome. I think it is working (finally).

Now that you have seen the code, I have three questions:

1. The batch accuracy does not represent the accuracy of the model, since it can be very inaccurate in the first iterations. Ideally, the accuracy should be measured in the last epoch, where most of the “learning” has already happened. How can I move from batch accuracy to a more general model accuracy that I can later also apply to testing?
2. Is the loss function you mentioned just a binary cross-entropy loss?
3. Any tips to improve the model in general? I have a relu -> sigmoid two-layer NN and have to stick to a very specific architecture.
1. Calculate the running accuracy during training and calculate the validation accuracy after each training epoch. This should give you a good signal of your model’s performance (see the sketch after this list).

2. nn.BCEWithLogitsLoss is mathematically equivalent to torch.sigmoid followed by nn.BCELoss, but the former approach is numerically more stable.

3. Not sure, as it depends on your model as well as the loss curves. E.g. if your model is overfitting, increase the regularization; if it’s underfitting, add more capacity to it. Since it seems you have to stick to a specific architecture, your options might be limited.
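
For point 1, something along these lines could work (a sketch only: it reuses model, optim, loss_function, train_x/train_y, batch_size and epochs from above, and assumes val_x/val_y are held-out numpy arrays with the same layout as train_x/train_y and that the model output has shape (batch, 1)):

import numpy as np
import torch

for epoch in range(epochs):
    model.train()
    correct, total, train_loss = 0, 0, []
    for idx in range(0, train_x.shape[0], batch_size):
        batch_x = torch.from_numpy(train_x[idx:idx + batch_size]).float()
        # reshape the labels to (batch, 1) so they line up with the model output
        batch_y = torch.from_numpy(train_y[:, idx:idx + batch_size]).float().view(-1, 1)

        output = model(batch_x)
        loss = loss_function(output, batch_y)
        optim.zero_grad()
        loss.backward()
        optim.step()

        train_loss.append(loss.item())
        preds = output > 0.5               # use torch.sigmoid(output) > 0.5 if the model returns raw logits
        correct += (preds == batch_y.bool()).sum().item()
        total += batch_y.numel()
    train_acc = correct / total            # running accuracy over the whole epoch

    # validation accuracy after each epoch
    model.eval()
    with torch.no_grad():
        val_out = model(torch.from_numpy(val_x).float())
        val_preds = val_out > 0.5          # same caveat about raw logits as above
        val_acc = (val_preds == torch.from_numpy(val_y).float().view(-1, 1).bool()).float().mean().item()

    print("Epoch {}: loss {:.4f}, train acc {:.3f}, val acc {:.3f}".format(
        epoch, np.mean(train_loss), train_acc, val_acc))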
