Hi everyone,
I recently came across this approach in TensorFlow, which creates a dataset from a list and lets you map, batch, and prefetch your data. I would like to do something similar in PyTorch. My dataset does not fit into memory, so I wanted to know whether there is an equivalent approach using PyTorch datasets and dataloaders?
import tensorflow as tf
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
dataset = dataset.map(lambda x: x*2)
dataset = dataset.batch(64)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
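For comparison, here is a rough sketch of what I imagine the PyTorch side might look like: an IterableDataset that yields (and transforms) elements lazily, with a DataLoader handling the batching. The plain list here just stands in for data streamed from disk, and the class name is my own invention:

```python
import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamDataset(IterableDataset):
    """Yields elements lazily so the full dataset never sits in memory."""

    def __init__(self, source):
        self.source = source  # any iterable, e.g. a generator over files

    def __iter__(self):
        # "map" step: transform each element as it is yielded
        for x in self.source:
            yield x * 2

dataset = StreamDataset([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# "batch" step: DataLoader collates consecutive elements into tensors.
# With num_workers > 0, batches are prepared in background worker
# processes, and prefetch_factor controls how many batches each worker
# keeps ready -- the closest analogue to tf.data prefetching I know of.
loader = DataLoader(dataset, batch_size=4, num_workers=0)

for batch in loader:
    print(batch)
```

Is this the idiomatic way to do it, or is there something closer to the tf.data pipeline style?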