and was getting weird errors. How do I know that the dataloader is actually using the right dataset if both splits use the same root path? Is passing the argument train=False enough on its own?
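For what it's worth, one quick way to convince yourself that train=False actually gave you a different split is to compare dataset sizes. A minimal sketch, assuming torchvision's MNIST (the same idea works for any dataset whose splits differ in size):

```python
import torchvision

# Both splits live under the same root, but the flag picks the file that
# gets loaded. The split sizes differ, so len() tells you which one you got.
train_set = torchvision.datasets.MNIST(root='./data', train=True, download=True)
test_set = torchvision.datasets.MNIST(root='./data', train=False, download=True)

print(len(train_set))  # 60000 -> the train split
print(len(test_set))   # 10000 -> the test split
```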
I am sorry about my attitude, and I should have said it in a better way. I apologize for it.
However, I still think that the flag is very clearly named, and it makes sense for a directory root to contain both train and test splits. But this doesn’t justify my attitude. I’m sorry.
The reason I asked is that I get nearly zero test error, which makes no sense, so I just didn't know where to look anymore. My question also reflects my own frustration (as does yours), because this is very simple and very clearly labeled.
The comparison (max_indices != labels) returns a torch.ByteTensor, which can overflow with your batch size of 10000.
Adding a .float() cast to this line, (max_indices != labels).float().sum()..., will give a train error of ~0.622 and a test error of ~0.640.
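Here is a minimal self-contained sketch of the fix (the tensor names and the batch size of 10000 are taken from the thread; the actual predictions are stand-ins). On 0.3.x-era PyTorch, comparisons return a torch.ByteTensor, and summing it accumulates in uint8, which wraps past 255:

```python
import torch

# Hypothetical stand-ins for the thread's predictions and targets.
max_indices = torch.LongTensor(10000).random_(0, 10)
labels = torch.LongTensor(10000).random_(0, 10)

mismatches = (max_indices != labels)   # ByteTensor of 0s and 1s on 0.3.x
num_wrong = mismatches.float().sum()   # cast BEFORE summing to avoid uint8 overflow
error_rate = num_wrong / labels.size(0)
print(error_rate)
```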
Did you not get an error? I got a RuntimeError when trying to run your code:
RuntimeError: value cannot be converted to type uint8_t without overflow: 8821
No, I never got a runtime error, which is weird. The printed errors are what my code and GPU produced. My versions of PyTorch and torchvision are:
torch (0.3.1.post2)
torchvision (0.2.0)
Is the way I'm tracking the errors wrong (or perhaps just slow)? Is this what I'm supposed to be doing, or how is it supposed to be done? (I wish PyTorch just had its own built-in class for this.) [Thanks so much for the help and your humor!]
Simon, so the error is in the way I compute the error. Do you have any advice on it? What is the standard thing to do in PyTorch? I am sure there is something, right? I'm not the first one tracking the error. Is there no error class or module PyTorch provides that is bug-free?
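Core PyTorch (at least in the 0.3.x era) doesn't ship a dedicated metric class; the standard thing is to hand-roll a few lines in the evaluation loop and accumulate counts in Python ints, which cannot overflow. A minimal sketch, written against more recent PyTorch (on 0.3.x you would wrap the batches in Variable instead); the function name and arguments are just illustrative:

```python
import torch

def evaluate_error(model, loader):
    # Count mismatches batch by batch; accumulating in Python ints
    # sidesteps any tensor-dtype overflow entirely.
    model.eval()
    num_wrong, num_total = 0, 0
    with torch.no_grad():  # not available on 0.3.x; use volatile Variables there
        for inputs, labels in loader:
            outputs = model(inputs)
            _, max_indices = outputs.max(dim=1)  # predicted class per sample
            num_wrong += int((max_indices != labels).long().sum())
            num_total += labels.size(0)
    return num_wrong / num_total
```

In newer ecosystems there are also third-party packages such as torchmetrics or ignite that wrap this pattern in a metric class, but the hand-rolled loop above is what most code of that era did.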