Converting a torch::Tensor to a std::vector

I’m writing a shim to use a C++ function with PyTorch and would like to know how to convert a torch::Tensor to a std::vector<int32_t>.

float f_cpp(std::vector<int32_t>& result);

float f(torch::Tensor result_t) {

  // std::vector<int32_t> <-- torch::Tensor?
  // My (non-compiling) attempt:
  std::vector<int32_t> result = result_t.data_ptr<std::vector<int32_t>>();

  return f_cpp(result);
}

The problem here is that a std::vector can’t use “foreign” memory.
Some avenues that might work:

  • Use a raw pointer (int32_t*) as an array, or an ArrayRef<int32_t> (available in c10). In either case you need to keep the tensor allocated while you are using the data. Also note that you need to be a bit careful with strides if your tensor can be non-contiguous.
  • Allocate the memory in the vector and then use from_blob to get a tensor. In this case you need to keep the vector around while using the tensor.
  • Copy the data.
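The third avenue (copying) is the simplest when the extra copy is affordable. Here is a minimal sketch of it, using a plain int32_t buffer as a stand-in for the tensor's storage so the example has no libtorch dependency; with libtorch, the pointer would come from result_t.contiguous().data_ptr<int32_t>() (the assumed real calls are noted in the comments):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy a contiguous int32 buffer into a std::vector that the C++ side owns.
// With libtorch, p would be result_t.contiguous().data_ptr<int32_t>() and
// n the number of elements in the (1-d) tensor.
std::vector<int32_t> to_vector(const int32_t* p, std::size_t n) {
    // The iterator-pair constructor copies the n elements into the vector,
    // so the vector stays valid even after the tensor is freed.
    return std::vector<int32_t>(p, p + n);
}
```

Because the vector owns its copy of the data, you no longer need to keep the tensor alive, which is the trade-off against the pointer/ArrayRef approach.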

Best regards

Thomas

Hey Thomas,

Thanks for replying, I’m not a C++ developer so I need a little bit more clarification.

I’m only dealing with 1-d tensors, so if I go with int32_t*, am I on the right track with:

auto r_ptr = result_t.data_ptr<int32_t>();
std::vector<int32_t> result{r_ptr, r_ptr + result_t.?};

If so, how do I get the size of the torch::Tensor result_t?

Regards,

I’d recommend result_t.size(0)
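Putting the pieces together, the shim could look like the sketch below. A tiny mock Tensor struct stands in for torch::Tensor so the example is self-contained and compilable without libtorch; the real torch::Tensor provides data_ptr<int32_t>() and size(0) with the same meaning (and you would call .contiguous() first if the tensor might be strided). The summing body of f_cpp is a placeholder for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

// Mock standing in for torch::Tensor; it exposes only the two calls the
// shim needs, with the same names the real class provides.
struct FakeTensor {
    std::vector<int32_t> storage;
    template <typename T> T* data_ptr() { return storage.data(); }
    std::int64_t size(std::size_t /*dim*/) const {
        return static_cast<std::int64_t>(storage.size());
    }
};

// The existing C++ function being wrapped; summing is a placeholder body.
float f_cpp(std::vector<int32_t>& result) {
    return static_cast<float>(
        std::accumulate(result.begin(), result.end(), 0));
}

// The shim: copy the 1-d tensor's data into a vector, then call f_cpp.
float f(FakeTensor result_t) {
    int32_t* r_ptr = result_t.data_ptr<int32_t>();
    std::vector<int32_t> result(r_ptr, r_ptr + result_t.size(0));
    return f_cpp(result);
}
```

With the real type, only the parameter type changes (FakeTensor becomes torch::Tensor); the body of f stays the same.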
