Accuracy changes again and again

Why does the accuracy change again and again? I ran the same code 3 times and got 3 different accuracies:
1st time: 33%
2nd time: 66%
3rd time: 87%
Kindly tell me what is happening.

Please describe what you are working on. Simply asking about accuracy will confuse people.

I am working on a CNN.

There are random factors at play: how you batch your data, how you initialise your weights, etc. To get the same result you can seed everything that is random, e.g. torch.manual_seed(0). You might have to do the same for numpy, torch.cuda, and any other libraries that use randomness.
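A minimal sketch of what "seed everything" could look like. This is an illustration, not the one canonical recipe: the numpy and torch imports are wrapped in try/except so the snippet also runs where those libraries are not installed, and for full determinism on GPU you would additionally need settings such as torch.backends.cudnn.deterministic = True.

```python
import random

def seed_everything(seed: int = 0) -> None:
    """Seed every random source we know about so repeated runs match."""
    random.seed(seed)                      # Python's built-in RNG
    try:
        import numpy as np
        np.random.seed(seed)               # NumPy RNG (shuffling, init, etc.)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)            # CPU RNG for weight init, dropout
        torch.cuda.manual_seed_all(seed)   # all GPU RNGs (no-op without CUDA)
    except ImportError:
        pass

# Two runs with the same seed produce identical "shuffled" sequences:
seed_everything(0)
first = [random.randint(0, 9) for _ in range(5)]
seed_everything(0)
second = [random.randint(0, 9) for _ in range(5)]
assert first == second
```

Call seed_everything once at the top of your training script, before any data loading or model construction, so every downstream random draw starts from the same state.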

With such extreme changes, I guess the number of test samples is very small, so a small change in the model can cause a large change in performance.

Can you add more info: what model, how are you training, and what are the training and test set statistics?
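To make the point about test-set size concrete, here is a back-of-the-envelope sketch (the sample counts below are hypothetical, chosen just for illustration):

```python
# Each prediction that flips between correct and incorrect moves accuracy
# by 100 / n_test percentage points, so tiny test sets give noisy accuracy.
def accuracy_swing_per_sample(n_test: int) -> float:
    return 100.0 / n_test

# With 600 test samples, one flipped prediction barely moves accuracy...
print(round(accuracy_swing_per_sample(600), 2))  # ~0.17 points
# ...but with only 30 samples, each flip moves it by more than 3 points.
print(round(accuracy_swing_per_sample(30), 2))   # ~3.33 points
```

So swings as large as 33% to 87% between runs suggest either a very small evaluation set or a training run that has not converged to a stable solution.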

Thanks for the reply. Can you tell me about the effect of batch size? Does it also matter for accuracy?
I have a dataset of 3064 images. What should the batch size be for that number of images?

Thank you for the quick response.

I was talking about the size of the whole test set, not the batch size.

So, how many of these 3064 do you use for training and what portion for testing?

I use 1500 for training and 600 for testing.

That doesn’t add up to 3064! :wink:

As @Oli said, there are random factors at play. Check Using the same start parameters again to use the same parameters again and again. :slight_smile:

I am not using the whole dataset; I just use 1500 for training and 600 for testing.

Thanks for the response.