Loading tensors from PyTorch to Torch C++

Hi,
I am trying to load tensors from PyTorch into C++, following the suggestions in this GitHub issue. However, it looks like torch::jit::script::Module no longer has get_attribute, and I am having a tough time understanding how to get named parameters using the current API.

Can you please suggest how to get named_parameters with the current API?

Hey, sorry for the confusion here. The TorchScript C++ API is still experimental and we made some changes to it; we will update the docs in the next release. To answer your question, we are trying to mimic the module API in Python, so you can get the named parameters via the named_parameters() call in C++. See a list of methods here.
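As a rough sketch of what that looks like when enumerating everything a loaded module exposes (not just parameters), something like the following should work; the exact iterator types can differ between releases, and the file name is just a placeholder:

#include <torch/script.h>

#include <iostream>

int main() {
  torch::jit::script::Module m = torch::jit::load("container.pt");

  // Each entry has a .name and an IValue .value, mirroring the Python module API.
  for (const auto& attr : m.named_attributes(/*recurse=*/true)) {
    std::cout << attr.name << "\n";
  }
  return 0;
}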

Thank you for your response. I was able to get the values using attr(), but I still could not understand how to extract values from slot_iterator_impl.
If it's not too much trouble, can I please request sample code for extracting the values in the same example (reproduced below)? It would help me better understand the codebase!

Thank you again!

#include <torch/script.h>

#include <iostream>
#include <memory>

int main(int argc, const char *argv[]) {
  torch::jit::script::Module container = torch::jit::load("container.pt");

  // Load values by name
  torch::Tensor a = container.get_attribute("a").toTensor();
  std::cout << a << "\n";

  torch::Tensor b = container.get_attribute("b").toTensor();
  std::cout << b << "\n";

  std::string c = container.get_attribute("c").toStringRef();
  std::cout << c << "\n";

  int64_t d = container.get_attribute("d").toInt();
  std::cout << d << "\n";

  return 0;
}
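For context, here is roughly what I got working with attr() in the meantime; this is just my own adaptation, so it may not be the intended replacement for get_attribute:

#include <torch/script.h>

#include <iostream>
#include <string>

int main() {
  torch::jit::script::Module container = torch::jit::load("container.pt");

  // attr() returns an IValue by name, much like get_attribute() used to.
  torch::Tensor a = container.attr("a").toTensor();
  std::cout << a << "\n";

  torch::Tensor b = container.attr("b").toTensor();
  std::cout << b << "\n";

  std::string c = container.attr("c").toStringRef();
  std::cout << c << "\n";

  int64_t d = container.attr("d").toInt();
  std::cout << d << "\n";

  return 0;
}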

If you are just trying to move values between Python and C++, the API in this comment is now the blessed way to do that. But to answer your question, for a model like

import torch

class Model(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.w1 = torch.nn.Parameter(torch.ones(2, 2))
        self.w2 = torch.nn.Parameter(torch.ones(2, 2))

    def forward(self):
        return self.w1 + self.w2

m = torch.jit.script(Model())
torch.jit.save(m, 'model.pt')

You can iterate over the parameters like:

#include <torch/script.h>

#include <iostream>

int main() {
  auto m = torch::jit::load("model.pt");
  // Recursively iterate over the parameters of this module and its submodules.
  for (torch::jit::script::Named<at::Tensor> p :
       m.named_parameters(/*recurse=*/true)) {
    std::cout << p.name << ": " << p.value << "\n";
  }
  return 0;
}


I tried to load it using the mentioned link, but I am getting an error on the line
torch::IValue x = torch::pickle_load(f);
I tried using the libtorch nightly as well as libtorch 1.7.1:
error: ‘pickle_load’ is not a member of ‘torch’

torch::jit::pickle_load works
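
For anyone hitting the same error, here is a minimal sketch of that route against libtorch 1.7.x; the file path is a placeholder, and it assumes the tensor was pickled on the Python side (e.g. with torch.save, as in the linked comment):

#include <torch/script.h>

#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
  // Read the raw pickle bytes; "x.pt" stands in for whatever file Python produced.
  std::ifstream input("x.pt", std::ios::binary);
  std::vector<char> bytes((std::istreambuf_iterator<char>(input)),
                          std::istreambuf_iterator<char>());

  // In libtorch 1.7.x the function lives in the torch::jit namespace.
  torch::IValue ivalue = torch::jit::pickle_load(bytes);
  torch::Tensor tensor = ivalue.toTensor();
  std::cout << tensor << "\n";
  return 0;
}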