Compute accuracy in regression

Hello,
I have an Excel file that contains two columns, “input” and “label” (examples from the file are shown below). I want to implement a regression task, and I need to use the lines of code below inside my k-fold cross-validation loop:

Some examples of the dataset:

[screenshot of a few “input”/“label” rows from the Excel file]

correct = 0
total = 0

with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.detach(), 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')

I think that, since this is not a classification problem, it is not correct to use this line of the code:

correct += (predicted == labels).sum().item()

Could you please let me know how I can change the code to get accuracy in this scenario?

Hi @jahanifar

For regression tasks, accuracy isn’t a meaningful metric. You could use the mean squared error (MSE):

$\text{MSE} = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2$

where $\hat{y}_i$ is the predicted value for example $i$ and $N$ is the total number of examples.

So you could do:

MSE = torch.pow(predicted - labels, 2).sum() / total
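To plug this into your evaluation loop, note that for regression you would compare the raw network outputs to the labels directly; the torch.max line from the classification tutorial (which picks a class index) is not needed. A minimal sketch, assuming net outputs a single value per example and testloader yields (inputs, labels) pairs as in your snippet:

total = 0
squared_error = 0.0

with torch.no_grad():
    for data in testloader:
        inputs, labels = data
        # the network predicts one continuous value per example, not class scores
        outputs = net(inputs).view(-1)
        labels = labels.view(-1).float()
        # accumulate the sum of squared errors and the number of examples
        squared_error += torch.pow(outputs - labels, 2).sum().item()
        total += labels.size(0)

mse = squared_error / total
print(f'MSE of the network on the test set: {mse:.4f}')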

Thanks a lot. It looks like computing the cost function, hence I think it is related to the loss, not accuracy. Am I right?

Right.
MSE is a popular loss function for regression tasks, and it is used as an evaluation metric, too.
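As a rough sketch of both uses (assuming the same net, a trainloader for the training split, and hypothetical validation tensors val_inputs and val_labels; the optimizer choice and learning rate are only placeholders):

import torch
import torch.nn as nn

criterion = nn.MSELoss()  # MSE as the training loss
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for inputs, labels in trainloader:
    optimizer.zero_grad()
    outputs = net(inputs).view(-1)
    loss = criterion(outputs, labels.view(-1).float())
    loss.backward()
    optimizer.step()

# the same criterion doubles as an evaluation metric on held-out data
with torch.no_grad():
    # val_inputs and val_labels are hypothetical validation tensors
    val_outputs = net(val_inputs).view(-1)
    val_mse = criterion(val_outputs, val_labels.view(-1).float()).item()
print(f'Validation MSE: {val_mse:.4f}')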

When evaluating regression models, you really cannot use accuracy the way it is defined for classification tasks: if y = 45.5 and yhat = 45.6, accuracy would count it as a wrong prediction, even though it is a fairly good prediction for a regression task.

Other popular metrics are R², RMSE, MAE, etc., which are defined in a similar way.
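For illustration, assuming you have gathered all predictions and ground-truth values into two 1-D tensors preds and targets (hypothetical names), these metrics follow directly from their definitions:

import torch

# preds and targets: hypothetical 1-D tensors of predictions and ground-truth labels
mse = torch.mean((preds - targets) ** 2)
rmse = torch.sqrt(mse)                               # root mean squared error
mae = torch.mean(torch.abs(preds - targets))         # mean absolute error
ss_res = torch.sum((targets - preds) ** 2)           # residual sum of squares
ss_tot = torch.sum((targets - targets.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                             # coefficient of determination (R²)

print(f'RMSE: {rmse.item():.4f}, MAE: {mae.item():.4f}, R2: {r2.item():.4f}')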
