Training accuracy always 0? Need help please

I have done a lot of searching trying to find an answer to this issue. I have tried Reddit, Stack Overflow, and Google, but to no avail. My results look like this:

https://pastebin.com/CbjY80Qe

This is my code that I am using right now:
https://pastebin.com/9QbpV4h8

I have tried changing the learning rate and batch size, and I have also tried multiplying train_acc and train_loss by 100 after dividing by the number of images. None of this works; it still produces these unusual results.

Please help me figure out what I am doing wrong!

Thank you very much,

Aeryes.

Could you check the type of (prediction == labels.data)? If it's a ByteTensor, could you cast it to float before summing?
Also, could you divide by a float, i.e. 4242. instead of 4242?
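To illustrate the suggestion above, here is a minimal sketch (the prediction and label values are made up for illustration): the comparison produces a boolean mask (a ByteTensor in older PyTorch versions), and casting it to float before summing and dividing keeps the whole accuracy computation in floating point.

```python
import torch

# Hypothetical predictions and labels, for illustration only
prediction = torch.tensor([0, 1, 2, 1])
labels = torch.tensor([0, 1, 1, 1])

correct = (prediction == labels)
# 'correct' is a boolean mask (ByteTensor in older PyTorch releases),
# so cast to float before summing to avoid integer arithmetic surprises
acc = correct.float().sum() / float(len(labels)) * 100
print(acc.item())  # 75.0 for these made-up values
```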


Maybe make sure the types match in torch.sum(prediction == labels.data), and then cast to float when you normalize:

acc = torch.sum(prediction == labels.data).float() / number_of_examples * 100

Thank you so much. I did this and it seems to have done the job. I am now getting sensible readings!

This helps.
sum() worked before but doesn't work now; torch.sum() does. Are there any differences between sum() and torch.sum()?

There might be some other issue then. sometensor.sum() and torch.sum(sometensor) should yield the same result. Maybe you are calling .sum() on a non-PyTorch tensor (e.g., a NumPy array)?
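A quick sketch of the point above (values are made up for illustration): for a torch tensor, the method and the free function are the same operation, while a NumPy array has its own .sum() that returns a NumPy scalar rather than a tensor.

```python
import numpy as np
import torch

t = torch.tensor([1, 2, 3])
# For a torch tensor, the method and the function are equivalent
print(t.sum())       # tensor(6)
print(torch.sum(t))  # tensor(6)

a = np.array([1, 2, 3])
# A numpy array also has .sum(), but it returns a numpy scalar, not a
# torch tensor; torch.sum expects a Tensor, so convert first if needed
print(a.sum())                         # 6
print(torch.sum(torch.from_numpy(a)))  # tensor(6)
```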