Dataloader option for pinned memory

Hello, how can I have a DataLoader in C++ load its batches into pinned memory? I have multiple workers loading the batches; is there a way to have these background threads do the pinning, rather than having to pin the memory myself after popping a batch off the dataloader?

For example, my current dataloader looks like

loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(
    std::move(c),
    torch::data::DataLoaderOptions()
        .batch_size(batch_size)
        .workers(num_workers)
        .drop_last(true));

Could I change it to something similar to

loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(
    std::move(c),
    torch::data::DataLoaderOptions()
        .batch_size(batch_size)
        .workers(num_workers)
        .drop_last(true)
        .pin_memory(true));

Thanks

I don’t see a pin_memory option listed in DataLoaderOptions, so I would assume your code would fail to compile.
If so, you might need to manually pin the memory on the host for your data.
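A minimal sketch of that manual approach, pinning on the main thread after popping each batch and then copying to the GPU asynchronously. This assumes the dataset uses the Stack<> collation so each batch is a single torch::data::Example<>:

#include <torch/torch.h>

torch::Device device(torch::kCUDA);

for (auto& batch : *loader) {
  // Pin on the host, then copy to the GPU without blocking the host.
  // Note the pinning here runs on the main thread, which is exactly
  // the overhead the question is trying to avoid.
  auto data = batch.data.pin_memory().to(device, /*non_blocking=*/true);
  auto target = batch.target.pin_memory().to(device, /*non_blocking=*/true);
  // ... forward/backward pass ...
}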

I see that PyTorch’s Python DataLoader has a pin_memory option. Do you know if it simply hasn’t been added to libtorch yet, or whether there’s another approach needed for the workers to pin the memory? I’m trying to avoid rebuilding from source if possible. My goal is to minimize training time, which is why I’m trying to avoid doing the pinning in the main process.

I would assume this functionality wasn’t ported to libtorch. The Python implementation can be found here for the _SingleProcessDataLoaderIter and here for the _MultiProcessingDataLoaderIter. The docs in this source file can also be helpful for understanding how this functionality was implemented.
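One possible workaround: I believe the dataset’s get_batch (and therefore its transform pipeline) runs in the worker threads, so you could pin the already-collated batch in a custom BatchTransform applied after Stack. A rough sketch, where PinBatch is a hypothetical transform I’m defining here, c is your underlying dataset before stacking, and I haven’t benchmarked whether this actually helps:

#include <torch/torch.h>

// Hypothetical transform: pins the collated batch so the copy into
// pinned memory happens inside the dataloader's worker threads.
struct PinBatch : torch::data::transforms::BatchTransform<
                      torch::data::Example<>, torch::data::Example<>> {
  torch::data::Example<> apply_batch(torch::data::Example<> batch) override {
    return {batch.data.pin_memory(), batch.target.pin_memory()};
  }
};

auto dataset = std::move(c)
                   .map(torch::data::transforms::Stack<>())
                   .map(PinBatch{});

auto loader = torch::data::make_data_loader<torch::data::samplers::RandomSampler>(
    std::move(dataset),
    torch::data::DataLoaderOptions()
        .batch_size(batch_size)
        .workers(num_workers)
        .drop_last(true));

In the main loop you would then only do the non_blocking copy to the device, since the batches popped off the loader are already pinned.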