Accuracy in PyTorch

How can I measure the accuracy of cluster predictions in PyTorch?

Could you explain a bit more what you mean by cluster prediction?
I searched for it, but couldn’t find anything useful.
Do you want to create clusters using unsupervised learning?

I have cluster outputs in the form of 0s and 1s, and I want to make two separate classes based on the 0s and 1s.
How can I do this?
Kindly help.

I assume you have input data and a clustering algorithm that returns the corresponding cluster for each sample.
If that’s the case, you can simply split the data using this code:

import torch

# Example data: 100 samples with 2 features each
data = torch.randn(100, 2)
# Random cluster assignments (0 or 1), one per sample
clusters = torch.empty(100, dtype=torch.long).random_(2)

# Boolean masks select the samples belonging to each cluster
data_class0 = data[clusters == 0]
data_class1 = data[clusters == 1]
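
As a quick sanity check on the split (using the example tensors above), the two subsets together should account for every sample:

# Every sample ends up in exactly one of the two subsets
assert data_class0.size(0) + data_class1.size(0) == data.size(0)
print(data_class0.shape, data_class1.shape)  # e.g. torch.Size([52, 2]) torch.Size([48, 2])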

Can you please tell me how I can measure the goodness of this code?

Do you mean accuracy by “goodness”, or do you have another definition in mind?
If so, you would need target labels and compare the “predictions” with them.

Can you show me some sample code, please?

Assuming clusters are your predictions:

import torch

# Random predictions and targets for illustration
clusters = torch.empty(100, dtype=torch.long).random_(2)
targets = torch.empty(100, dtype=torch.long).random_(2)

# Fraction of predictions that match the targets
accuracy = targets.eq(clusters).float().sum() / clusters.size(0)
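
One caveat worth noting: cluster IDs returned by an unsupervised algorithm are arbitrary, so cluster 0 might actually correspond to target label 1. A minimal sketch for the two-cluster case is to evaluate both labelings and keep the better one:

import torch

clusters = torch.empty(100, dtype=torch.long).random_(2)
targets = torch.empty(100, dtype=torch.long).random_(2)

# Compare both possible assignments of cluster IDs to target labels
acc = targets.eq(clusters).float().mean()
acc_flipped = targets.eq(1 - clusters).float().mean()
accuracy = torch.max(acc, acc_flipped)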

What is the acceptable range for the accuracy? I mean to say, what is the minimum percentage of accuracy for my code to be accepted?

Also, can you please tell me: should the accuracy be measured during both training and testing?

I don’t know your use case and thus cannot tell what accuracy is “acceptable”.
The performance of your model and processing pipeline will be measured by the test accuracy.
While measuring it you should take care to avoid data leakage, i.e. do not tune your hyperparameters or train using the test set; use a separate validation split for that.
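
For illustration, here is a minimal sketch of such a split (the sizes and tensors are arbitrary assumptions, matching the earlier snippets):

import torch

data = torch.randn(100, 2)
targets = torch.empty(100, dtype=torch.long).random_(2)

# Shuffle the sample indices and carve out train/validation/test splits
perm = torch.randperm(data.size(0))
train_idx, val_idx, test_idx = perm[:60], perm[60:80], perm[80:]

train_data, train_targets = data[train_idx], targets[train_idx]  # used for training
val_data, val_targets = data[val_idx], targets[val_idx]          # used for hyperparameter tuning
test_data, test_targets = data[test_idx], targets[test_idx]      # touched only for the final evaluation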

Thanks for the reply.
Can you guide me on what I can use instead of TensorBoard if I have no GPU?

You don’t need a GPU to use TensorBoard or a PyTorch-compatible wrapper.
That said, I can also recommend Visdom.
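
As an illustration, the SummaryWriter shipped in torch.utils.tensorboard logs metrics entirely on the CPU (it needs the tensorboard package installed, but no GPU); the accuracy values below are placeholders:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()  # writes event files to ./runs/ by default
for epoch in range(10):
    accuracy = 0.5 + 0.05 * epoch  # placeholder value for illustration
    writer.add_scalar('Accuracy/train', accuracy, epoch)
writer.close()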


Can you explain how the loss function is used in code?

Have a look at this tutorial on training a classifier.
The loss function is chosen for your use case, e.g. classification or regression, and is used e.g. to calculate the gradients.
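
Here is a minimal sketch of one training step, assuming a simple linear classifier (the model and data are illustrative, not taken from the tutorial):

import torch
import torch.nn as nn

model = nn.Linear(2, 2)                # 2 input features -> 2 classes
criterion = nn.CrossEntropyLoss()      # a common loss for classification
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

data = torch.randn(100, 2)
targets = torch.empty(100, dtype=torch.long).random_(2)

output = model(data)                   # raw logits, shape [100, 2]
loss = criterion(output, targets)      # scalar loss for this batch
optimizer.zero_grad()
loss.backward()                        # the loss is used here to calculate the gradients
optimizer.step()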