How can I implement a chunked dataset in Torch?

I want to efficiently load a dataset that has many samples and is larger than my system's memory.

TensorFlow has TFRecords, which pack many samples into a single file. TensorFlow's data pipeline reads these files, shuffles the samples within each TFRecord, and then feeds them to the network. Because many samples are combined into one file, the pipeline spends less time opening and reading individual files.
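For context, a minimal version of the pipeline I mean looks roughly like this (the file pattern and feature spec here are made up for illustration):

```python
import tensorflow as tf

# Hypothetical file pattern and feature spec, just to show the shape of the pipeline.
files = tf.data.Dataset.list_files("data/*.tfrecord")
dataset = (
    tf.data.TFRecordDataset(files)      # stream records out of each chunk file
    .shuffle(buffer_size=10_000)        # shuffle within a sliding buffer
    .map(lambda raw: tf.io.parse_single_example(
        raw, {"image": tf.io.FixedLenFeature([], tf.string)}))
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
```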

Is there an equivalent in Torch?

I have seen people implement TFRecord readers for Torch, but I would think there are more native ways to get the same behavior. I could probably build something similar on top of an iterable dataset (a rough sketch is below), but I feel that such a capability should already exist.
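Here is a minimal sketch of what I have in mind, assuming each chunk is a `.pt` file containing a list of samples written with `torch.save` (the chunk format, the `chunks/*.pt` layout, and the `ChunkedDataset` name are my own assumptions, not an established convention):

```python
import glob
import random

import torch
from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class ChunkedDataset(IterableDataset):
    """Streams samples from chunk files with a shuffle buffer."""

    def __init__(self, pattern, shuffle_buffer=1000):
        self.files = sorted(glob.glob(pattern))
        self.shuffle_buffer = shuffle_buffer

    def __iter__(self):
        files = list(self.files)
        # Split the chunk files across DataLoader workers so each file is read once.
        info = get_worker_info()
        if info is not None:
            files = files[info.id::info.num_workers]
        random.shuffle(files)

        buffer = []
        for path in files:
            for sample in torch.load(path):  # load one chunk of samples into memory
                buffer.append(sample)
                if len(buffer) >= self.shuffle_buffer:
                    # Swap a random element to the end and yield it,
                    # keeping the buffer topped up as new samples arrive.
                    idx = random.randrange(len(buffer))
                    buffer[idx], buffer[-1] = buffer[-1], buffer[idx]
                    yield buffer.pop()
        # Drain whatever is left once all chunks are exhausted.
        random.shuffle(buffer)
        yield from buffer


loader = DataLoader(ChunkedDataset("chunks/*.pt"), batch_size=32, num_workers=4)
```

This only ever holds one chunk plus the shuffle buffer in memory, which is the property I am after, but it feels like something the library should provide out of the box.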