I am using the PyTorch DataLoader for the MNIST dataset. I wanted to know if there is any equivalent of make_one_shot_iterator in PyTorch. All I want to do is make a circular iterator so that I can do multiple passes through the dataset. I know I can achieve this by running multiple training iterations, but I wanted to know if there is a built-in way to do it.
I’m not sure what make_one_shot_iterator does, as the docs only mention: “Creates an iterator for elements of dataset.”
If you want to iterate the PyTorch Dataset, you could just use it in a loop:

dataset = datasets.MNIST(root='PATH', transform=...)
for data, target in dataset:
    # ...
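Regarding the “circular iterator” part of the question: one option is a small generator that re-creates the DataLoader iterator whenever it is exhausted, so training code can just keep pulling batches. This is a minimal sketch, not an official PyTorch API; it uses a synthetic TensorDataset in place of MNIST so it runs without downloading anything, and the `infinite_loader` helper name is made up for this example. Note that `itertools.cycle(loader)` would also loop forever, but it caches the first pass, so `shuffle=True` would not reshuffle between passes; re-creating the iterator avoids that.

```python
from itertools import islice

import torch
from torch.utils.data import DataLoader, TensorDataset

def infinite_loader(loader):
    # Restart the DataLoader each time it is exhausted, so that
    # shuffling (if enabled) produces a new order on every pass.
    while True:
        for batch in loader:
            yield batch

# Synthetic stand-in for MNIST: 100 samples of shape (1, 28, 28).
data = torch.randn(100, 1, 28, 28)
targets = torch.randint(0, 10, (100,))
loader = DataLoader(TensorDataset(data, targets), batch_size=64, shuffle=True)

# 100 samples yield only 2 batches per pass (sizes 64 and 36),
# so taking 5 batches wraps around the dataset multiple times.
sizes = [x.shape[0] for x, y in islice(infinite_loader(loader), 5)]
print(sizes)
```

The same helper works with the real `datasets.MNIST` dataset; only the dataset construction changes.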
Currently, when I iterate over the dataset with a batch size of 64, I get an error at the last batch, since the last batch has size 32. Is there any way to get the iterator to wrap around the dataset so I don’t get this error?
If you are using a DataLoader, you could drop the potentially smaller last batch via drop_last=True.
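As a quick illustration of `drop_last=True` (again with a synthetic stand-in for MNIST so the snippet is self-contained): with 100 samples and a batch size of 64, the incomplete final batch of 36 is simply discarded, so every batch the loader yields has the full batch size.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 100 samples: without drop_last, batch_size=64 would yield
# batches of size 64 and 36; with drop_last=True the trailing
# partial batch is discarded.
dataset = TensorDataset(torch.randn(100, 1, 28, 28),
                        torch.randint(0, 10, (100,)))
loader = DataLoader(dataset, batch_size=64, drop_last=True)

sizes = [x.shape[0] for x, _ in loader]
print(sizes)
```

This avoids shape errors caused by code that assumes a fixed batch size, at the cost of skipping a few samples each epoch.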