Returning more than 2 tensors in libtorch

Hello all,
I am writing a simple dataloader for an object detection task and trying to return 3 tensors from the get method. I intended to return a tuple containing all three, but I am unable to do so as the return type is torch::data::Example. Could anyone please guide me on how I can return all three tensors?

Wouldn’t returning all 3 tensors work with torch::data::Example, e.g. in the get method of your dataset:

torch::data::Example<> get(size_t i) {
    return {torch::ones(i), torch::ones(i), torch::ones(i)};
}

I did try that, but it threw an error saying no matching constructor was found, and I was only able to return at most two tensors.
Anyway, I will try that once again. Thanks a lot for replying, @ptrblck.

I just stumbled across this exact same issue when trying to create a custom dataset that returns 3 tensors vs. 2. I’m not crazy well versed in C++ but it looks like @ptrblck’s solution wouldn’t work (and didn’t for me at least) according to the source code. I think we’re limited to 1 or 2 outputs based on the templates in that header. I’d be interested to hear of any potential workarounds or solutions to this issue.

Did you try to write a custom Example class with more than two return types?
The linked class shows:

template <typename Data = Tensor, typename Target = Tensor>

so I would assume you can reuse this template and add more types to it?

Wow that was a quick response! I’m actually working on that approach now. I’ll see how it goes and report back with what I find.

I’m mostly a Python dev though so I’m a bit out of my element here.

EDIT: It looks like writing a custom Example class and providing it as a template argument when creating a custom dataset subclass (per this line here) should work. I’ll write up a minimal example here when I figure it out.


Apologies in advance if this doesn’t compile directly on a copy/paste. I wrote this loosely based on my actual implementation but haven’t compiled or tested this exact bit of code. I also haven’t fully tested this approach in a training loop, but it compiles and can be used to do batched inference with my TorchScript model. Note that I had to create a custom Example and a custom Stack class to accommodate the extra model input in transforms as well.

I’m not great with C++ but I’m wondering if there’s some way that parameter packs/variadic templating can be used to remove the 2 input limit on custom datasets? I started probing that with the Example class but don’t have any strong ideas on how to carry that forward into the transforms or anything.

It looks like batch.data and batch.target member access is used quite often in examples, transforms, etc. vs. taking the indexing/unpacking approach to getting inputs from a batch in Python. Maybe it’s worth exploring the Python approach here as well.

#include <filesystem>
#include <tuple>
#include <vector>

#include <torch/torch.h>

// Custom Example with a third field for the mask.
template <typename Data = torch::Tensor, typename Target = torch::Tensor,
          typename Mask = torch::Tensor>
struct Example {
  using DataType = Data;
  using TargetType = Target;
  using MaskType = Mask;

  Data data;
  Target target;
  Mask mask;

  Example() = default;
  Example(Data data, Target target, Mask mask)
      : data(std::move(data)), target(std::move(target)),
        mask(std::move(mask)) {}
};

// Custom Stack transform that collates all three tensors into a batch.
template <typename ExampleType>
struct Stack : public torch::data::transforms::Collation<ExampleType> {
  ExampleType apply_batch(std::vector<ExampleType> examples) override {
    std::vector<torch::Tensor> xs, ys, masks;
    for (auto &example : examples) {
      xs.push_back(std::move(example.data));
      ys.push_back(std::move(example.target));
      masks.push_back(std::move(example.mask));
    }
    return {torch::stack(xs), torch::stack(ys), torch::stack(masks)};
  }
};

using MyExample = Example<torch::Tensor, torch::Tensor, torch::Tensor>;

class MyDataset : public torch::data::datasets::Dataset<MyDataset, MyExample> {
  std::vector<torch::Tensor> xs, ys, masks;

 public:
  explicit MyDataset(const std::filesystem::path &dataFpath) {
    // loadDataFromFile is assumed to return a std::tuple of the three
    // tensor vectors.
    std::tie(xs, ys, masks) = loadDataFromFile(dataFpath);
  }
  virtual ~MyDataset() = default;

  MyExample get(size_t index) override {
    torch::Tensor x = xs[index];
    torch::Tensor y = ys[index];
    torch::Tensor mask = masks[index];

    return {x, y, mask};
  }

  torch::optional<size_t> size() const override { return xs.size(); }
};

auto main() -> int {
  const std::filesystem::path dataDir = "/path/to/data/";
  const int batchSize = 32;
  auto dataset = MyDataset(dataDir).map(Stack<MyExample>());
  auto dataLoader =
      torch::data::make_data_loader(std::move(dataset), batchSize);
}

Thanks for sharing your approach!
I’m sure the libtorch interface could be polished a bit more, but I also think that training in C++/libtorch is still an edge case (compared to inference), which is why the training API might not have gotten a lot of attention.

I agree on both fronts. There aren’t many use cases where training in C++ makes more sense than training in Python and converting a model to TorchScript for inference in C++.

I’m working on model training in C++ for an iOS app ultimately. Unfortunately I’ll need to be able to do on-device training for my use case, so I’m certain I’ll come across a lot of unpolished edges as I move forward. I’ll be sure to open issues and make PRs where I can along the way.

Thanks for the help on this!
