Why does my loss get worse when the only change I make is setting shuffle=False to shuffle=True in my DataLoader?
Are you referring to your training loss or validation loss? The training loss should be worse when you are shuffling your data, since there are many more combinations of batches. This is why we shuffle: so the model doesn’t overfit as much to the training data.
Thank you for your answer. I am referring to my training loss. But I need to use shuffling. Is there any other way to solve this problem?
I would encourage you to set shuffle=True. I would also not be very troubled if the training loss is a bit worse when you shuffle. How much worse is the loss when you shuffle compared to when you don’t? 10%, 100%, 2000%?
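For intuition on why the batches differ between the two settings, here is a minimal pure-Python sketch of what the sampler behind `DataLoader(dataset, batch_size=..., shuffle=True)` effectively does: it draws a fresh permutation of the dataset indices each epoch, whereas `shuffle=False` yields the same index batches every epoch. (The helper `make_batches` below is hypothetical, for illustration only, not part of the PyTorch API.)

```python
import random

def make_batches(n_samples, batch_size, shuffle, seed=None):
    """Mimic a DataLoader's sampling: return batches of dataset indices,
    permuted when shuffle is True."""
    indices = list(range(n_samples))
    if shuffle:
        # A real DataLoader reshuffles with a new permutation every epoch.
        random.Random(seed).shuffle(indices)
    return [indices[i:i + batch_size] for i in range(0, n_samples, batch_size)]

# shuffle=False: identical batches every epoch, so the optimizer sees
# the exact same gradient sequence and can exploit the fixed ordering.
fixed = make_batches(8, 2, shuffle=False)

# shuffle=True: batch composition changes from epoch to epoch
# (different seeds stand in for different epochs here).
epoch1 = make_batches(8, 2, shuffle=True, seed=0)
epoch2 = make_batches(8, 2, shuffle=True, seed=1)
```

Every sample still appears exactly once per epoch in both cases; only the grouping into batches changes, which is why the per-step loss curve looks noisier (and often a bit higher) with shuffling even though generalization usually improves.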
Thanks. With shuffle=True the model can converge, but with shuffle=False the loss values stay around 2–4. I have now found a way to train my model with shuffle=True.