Hi, I want to load the last saved batch id and batch data from the DataLoader.
batch_id = checkpoint['batch_id']      # value from the last iteration
batch_data = checkpoint['batch_data']  # value from the last iteration
for batch_id, batch_data in enumerate(train_iter):  # want train_iter to resume from the values above
In my opinion, you should make sure you get the samples in the same order. For that, you can set the dataset's shuffle=False (or use a sampler), and then skip the first batch_id batches of the dataset so that you can continue from the state of the last training run.
But why do you want to save the last training batch_id or batch_data at all? If you set shuffle=True, you just need to complete the remaining epochs that you didn't finish in your last training run.
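A minimal sketch of the first suggestion, assuming a toy TensorDataset in place of your real data and a hypothetical last_batch_id restored from the checkpoint:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the real training data.
dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))

# shuffle=False keeps the sample order identical across runs,
# so a saved batch_id pinpoints exactly where training stopped.
loader = DataLoader(dataset, batch_size=10, shuffle=False)

last_batch_id = 3  # assumed value restored from checkpoint['batch_id']

resumed_ids = []
for batch_id, (batch_data,) in enumerate(loader):
    if batch_id <= last_batch_id:
        continue  # skip batches already trained on before the interruption
    resumed_ids.append(batch_id)
    # ... training step on batch_data would go here ...
```

With 100 samples and batch_size=10 there are 10 batches (ids 0–9), so the loop above resumes at batch 4. Because the order is fixed, there is no need to save batch_data itself, only the batch_id.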
Thank you for your reply. In my case I am concatenating two datasets; although shuffle is set to False by default, the data from the two datasets still appeared shuffled after concatenation. I followed your first suggestion: I now skip the batches whose batch_id was already iterated, and I no longer save batch_data.
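For reference, `torch.utils.data.ConcatDataset` itself just appends the datasets in the order given, so with shuffle=False the combined order should be deterministic; if the order looks shuffled, the cause is likely elsewhere (e.g. a sampler or a shuffling wrapper). A quick sketch with two hypothetical toy datasets to check this:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Two hypothetical toy datasets standing in for the real ones.
ds_a = TensorDataset(torch.arange(0, 30).float().unsqueeze(1))
ds_b = TensorDataset(torch.arange(100, 130).float().unsqueeze(1))

# ConcatDataset appends ds_b after ds_a; with shuffle=False the
# DataLoader walks the combined dataset in that fixed order.
combined = ConcatDataset([ds_a, ds_b])
loader = DataLoader(combined, batch_size=10, shuffle=False)

# The first batch should contain the first samples of ds_a, in order.
first_batch = next(iter(loader))[0]
```

If this prints the samples in their original order on your data too, then skipping the first batch_id batches, as above, is enough to resume deterministically.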