Is the following the recommended way to do it in PyTorch:
for epoch in range(nb_epochs):
    for train_data, test_data in zip(trainloader, testloader):
        ...
Is this the PyTorch way to do it? Are there any inefficiencies or subtle things I should worry about?
Depending on how large trainloader and testloader are, you could use itertools.izip: https://docs.python.org/2/library/itertools.html#itertools.izip
I want to track the train and test error at the end of each epoch.
What would be the difference of using that?
If you’re using Python 3, you should use zip; zip returns an iterator. If you’re using Python 2, you can get some memory gains by using izip, because in Python 2 zip returns a list instead of an iterator, while izip returns an iterator. If you’re iterating over very long lists, the materialized lists will take up memory.
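A quick illustration of the laziness difference, using toy ranges rather than real data loaders:

```python
# Python 3's zip is lazy: pairs are produced one at a time.
a = range(10**6)
b = range(10**6)

z = zip(a, b)        # no million-tuple list is built here
first = next(z)      # only now is the first pair produced
print(first)         # (0, 0)

# In Python 2, zip(a, b) would materialize the full list of pairs up front;
# itertools.izip(a, b) behaves like Python 3's zip.
```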
But in the context of DL, zip truncates my training to the size of the test set, because test sets are usually smaller. I am sure someone has dealt with this before me…
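The truncation is easy to reproduce with plain lists standing in for the two loaders (toy data, not real DataLoaders):

```python
# Stand-ins for batches from trainloader / testloader (toy data)
train_batches = ["train0", "train1", "train2", "train3", "train4"]
test_batches = ["test0", "test1"]

pairs = list(zip(train_batches, test_batches))
print(len(pairs))  # 2 -- iteration stops at the shorter (test) loader,
                   # so 3 of the 5 training batches are never seen
```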
I’m not sure what you’re looking for, but itertools has a zip_longest function that might be helpful: https://docs.python.org/3.0/library/itertools.html
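A sketch of how zip_longest could be used here, again with plain lists as stand-ins for the loaders: once the shorter side is exhausted it yields None, so the loop can keep consuming training batches and simply skip the test step.

```python
from itertools import zip_longest

# Toy stand-ins for trainloader / testloader batches
train_batches = ["train0", "train1", "train2", "train3"]
test_batches = ["test0", "test1"]

seen_train, seen_test = 0, 0
for train_data, test_data in zip_longest(train_batches, test_batches):
    seen_train += 1            # every training batch is visited
    if test_data is not None:  # test batches run out after two iterations
        seen_test += 1

print(seen_train, seen_test)  # 4 2
```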