Computing classification accuracy from model output and ground truth in C++

Hi, all,

I have a question about computing classification accuracy from the model output and the ground truth in C++. In Python it is:
acc = torch.mean((output_label.max(1)[1] == y).float())

where output_label has size 64 × 10 (64 is the batch size and 10 is the number of classes), and y has size 64.

In C++ I converted it to:

auto labelmax = output_label.max(1);
auto acc = torch::mean(std::get<1>(labelmax) == y);

but I get a runtime error when computing acc, saying:

terminate called after throwing an instance of 'c10::Error'
what(): Can only calculate the mean of floating types. Got Bool instead.
Exception raised from mean_out_cpu_gpu at /pytorch/aten/src/ATen/native/ReduceOps.cpp:507

Any suggestions to fix that, please?

Thanks.

std::get<1>(labelmax) == y returns a boolean tensor, but torch::mean supports floating-point tensors only, so you have to convert the boolean tensor to float first, just like you did in Python.

Try this:
auto acc = torch::mean((std::get<1>(labelmax) == y).to(torch::kFloat));
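
For reference, here is a minimal self-contained sketch that mirrors the Python one-liner end to end. It assumes LibTorch and uses dummy random tensors with the shapes from your post, so the variable names and data are illustrative only:

#include <torch/torch.h>
#include <iostream>

int main() {
    // Dummy data with the shapes from the question:
    // a 64 x 10 batch of logits and 64 integer class labels.
    auto output_label = torch::randn({64, 10});
    auto y = torch::randint(0, 10, {64}, torch::kLong);

    // max(1) returns a (values, indices) tuple; index 1 holds the argmax.
    auto labelmax = output_label.max(1);
    auto pred = std::get<1>(labelmax);

    // Compare to the ground truth, cast the Bool tensor to Float,
    // then take the mean -- this mirrors the Python one-liner.
    auto acc = torch::mean((pred == y).to(torch::kFloat));

    std::cout << "accuracy: " << acc.item<float>() << std::endl;
    return 0;
}

As a side note, output_label.argmax(1) returns the index tensor directly, which avoids dealing with the tuple at all.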

Thanks.

I revised it per your comments and it works. Very helpful :slight_smile: