I have done a lot of searching trying to find an answer to this issue. I have tried Reddit, Stack Overflow, and just searching on Google, but to no avail. I am having issues with my results showing the following:
This is my code that I am using right now:
I have tried changing the learning rate and the batch size, and I have also tried multiplying train_acc and train_loss by 100 after dividing by the number of images. None of this works, and it still produces the unusual results.
Please help me figure out what I am doing wrong!
Thank you very much,
Could you check the type of (prediction == labels.data)? If it's a ByteTensor, could you cast it to float before summing it? Also, could you divide by a float number, i.e. 4242. instead of 4242?
Maybe make sure the types match in torch.sum(prediction == labels.data), and then when you normalize, cast it to float:

acc = torch.sum(prediction == labels.data).float() / number_of_examples * 100
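A minimal sketch of the suggested fix, using toy tensors (the actual prediction and labels values in the original training loop are unknown; these are illustrative):

```python
import torch

# Toy predictions and labels. In older PyTorch versions the comparison
# below yields a ByteTensor, so dividing the raw sum by an integer count
# truncates the accuracy; casting to float first avoids that.
prediction = torch.tensor([0, 1, 2, 1])
labels = torch.tensor([0, 1, 1, 1])

correct = (prediction == labels)  # element-wise 0/1 tensor
acc = correct.sum().float() / len(labels) * 100  # cast before dividing
print(acc.item())  # 75.0
```

The key point is the .float() call: summing first and casting before the division keeps the result a real-valued percentage instead of a truncated integer.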
Thank you so much. I did this and it seems to have done the job. I am now getting better readings!
sum() worked before, but it doesn't work now; torch.sum() does. Are there any differences between sum() and torch.sum()?
There might be some other issue then. tensor.sum() and torch.sum(tensor) should yield the same result. Maybe you are calling .sum() on a non-PyTorch tensor (e.g., a NumPy array)?
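A quick sketch illustrating the point: for a PyTorch tensor the method and the function agree, while a NumPy array has its own .sum() but is rejected by torch.sum() (the exact error message may vary by version):

```python
import torch
import numpy as np

t = torch.tensor([1.0, 2.0, 3.0])
# For a PyTorch tensor, the method and the function give the same result.
assert t.sum().item() == torch.sum(t).item() == 6.0

# A NumPy array also has a .sum() method, so code using .sum() can keep
# "working" after a tensor silently becomes an ndarray...
a = np.array([1.0, 2.0, 3.0])
print(a.sum())  # 6.0 -- NumPy's own sum

# ...but torch.sum() only accepts tensors, so it fails on the array.
try:
    torch.sum(a)
except TypeError as e:
    print("torch.sum on a NumPy array fails:", e)
```

That asymmetry would explain .sum() behaving oddly while torch.sum() raises an error: the object being summed may no longer be a torch.Tensor.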