Why does the dataset need to be moved into the loader?

Hello,
I am wondering why the dataset needs to be moved into the loader. All the examples I have seen create the loader like this:

auto loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(std::move(dataset), batch_size);

If we don’t move the dataset to the loader, like this:

auto loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(dataset, batch_size);

everything seems to work fine, and moreover we still have access to the dataset afterwards, for example to query its size with dataset.size().value(). If the dataset is moved into the loader, the size query returns 0.
So, can anyone explain to me why moving the dataset into the loader is necessary, and what problems I might expect if I don't do it?
Thank you very much for your help.

Answering my own question…
If the dataset is not moved, the loader makes a copy of it, which doubles the memory footprint and can be prohibitive for huge datasets. Moving the dataset transfers its storage to the loader, so the memory is not allocated twice. The downside is that the moved-from dataset object in the main program is left empty, so if its size is still needed, it has to be stored in a variable before the loader is constructed. In the normal flow of training, however, there should be no need to access the dataset directly after the loader has been created.
Hope I am right, and hope it helps.

The underlying data loader constructor is:

DataLoaderBase(
      DataLoaderOptions options,
      std::unique_ptr<Dataset> main_thread_dataset = nullptr)
      : options_(std::move(options)),
        main_thread_dataset_(std::move(main_thread_dataset)),
        sequencer_(new_sequencer()) {}

and torch::data::make_data_loader<torch::data::samplers::RandomSampler> is a generic wrapper that dispatches to the stateless overload of make_data_loader:

template <typename Dataset, typename Sampler>
torch::disable_if_t<
    Dataset::is_stateful,
    std::unique_ptr<StatelessDataLoader<Dataset, Sampler>>>
make_data_loader(Dataset dataset, Sampler sampler, DataLoaderOptions options) {
  return torch::make_unique<StatelessDataLoader<Dataset, Sampler>>(
      std::move(dataset), std::move(sampler), std::move(options));
}

StatelessDataLoader derives from DataLoaderBase.

As you can see, the dataset is held by a std::unique_ptr.

So, std::move or no std::move is not really a question of compiler permissiveness: make_data_loader takes the dataset by value, so passing an lvalue copy-constructs that parameter, while std::move casts it to an rvalue so it can be move-constructed. Both versions compile and run; the one without std::move simply pays for an extra copy, which is why it seems to work fine and why the original dataset keeps its contents.