Here is the problem: I am looking for a concise way to truncate an iterable-style dataset to a limited number of elements (so I can mock the real dataset and iterate quickly while debugging).
In the case of a “map-style” dataset, we can use “torch.utils.data.Subset” to shorten the dataset, but this doesn’t seem to work for the iterable-style case.
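For reference, this is how Subset truncates a map-style dataset; the random TensorDataset here is just a placeholder standing in for a real dataset:

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Placeholder map-style dataset with 100 samples
full_dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# Keep only the first 5 samples for quick debugging
small_dataset = Subset(full_dataset, indices=range(5))
print(len(small_dataset))  # 5
```

Subset works here because a map-style dataset supports `__getitem__` and `__len__`, which an iterable-style dataset does not.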
The solution I thought of was to transform an “iterable-style” dataset into a “map-style” dataset using the following code before calling “torch.utils.data.Subset”:
Processing a few samples and wrapping them in a map-style Dataset sounds like a good approach.
However, this line of code:
self.dataset = list(my_dataset)
would try to create all samples, if I’m not mistaken, and could be quite expensive.
Maybe iterating my_dataset for a few steps instead would work, which would allow you to wrap the samples in a TensorDataset.
I’m thinking about something like this:
data = []
target = []
dataset_iter = iter(my_dataset)
for _ in range(nb_samples):
    x, y = next(dataset_iter)
    data.append(x)
    target.append(y)
data = torch.stack(data)
target = torch.stack(target)
self.dataset = TensorDataset(data, target)
Thank you for your reply, it’s a good remark indeed!
I replaced my current solution with the following code:
from itertools import islice
m = 5  # number of samples in my new dataset
dataset = list(islice(my_dataset, m))
but I am still not satisfied by my solution, which doesn’t seem clean.
Concerning your proposed solution: although it is cleaner, it works for regular datasets but not for mine, because the dataset I am working with outputs pairs of sentences of variable length, so torch.stack fails on samples with different shapes.
If you have any other suggestion, to resolve the problem, I’ll be glad to hear about it.
This would also mean that you are not able to create batches from these samples unless you pad them or return a list, right?
In this case the cleanest approach might be to create a custom IterableDataset using an end counter, as shown e.g. in the example from the docs.
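A minimal sketch of that approach, assuming the source yields (x, y) pairs of variable length; TruncatedIterableDataset, ToySentences, and max_samples are illustrative names, not part of the PyTorch API:

```python
import torch
from torch.utils.data import IterableDataset, DataLoader

class TruncatedIterableDataset(IterableDataset):
    """Wraps another iterable dataset and stops after max_samples items."""

    def __init__(self, dataset, max_samples):
        self.dataset = dataset
        self.max_samples = max_samples

    def __iter__(self):
        # End counter: stop iterating once max_samples items were yielded
        for count, sample in enumerate(self.dataset):
            if count >= self.max_samples:
                break
            yield sample

# Toy iterable source yielding variable-length "sentence" pairs
class ToySentences(IterableDataset):
    def __iter__(self):
        for length in range(1, 100):
            yield torch.ones(length), torch.ones(length + 1)

small = TruncatedIterableDataset(ToySentences(), max_samples=5)
# batch_size=None yields samples one by one, so no padding is needed
loader = DataLoader(small, batch_size=None)
print(len(list(loader)))  # 5
```

Because the wrapper only truncates the stream, it leaves the variable-length samples untouched; batching them would still require padding or a custom collate_fn.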