Converting 1-d float vector to (C,H,W) tensor in C++

Given a 1-d array in Python labelled array, I can convert the array into a tensor using the following code

tensor = torch.FloatTensor(array) # array with normalised pixel values
tensor = torch.reshape(tensor, (3, 320, 320))
model([tensor]) # run model inference

How would I convert the 1-d array into a tensor in C++ with (C,H,W) dims, given that I have a pointer to the base array address?

P.S
I have tried at::Tensor tensor = torch::from_blob(array, { 3, input_width, input_height }, at::kFloat); however, I believe the resulting tensor is different from the one in Python, because the model inference in C++ produces different results.

from_blob is right, but you have to make sure

  1. that your array element datatype is indeed float
  2. using from_blob requires you to keep the memory behind the pointer alive for the lifetime of the returned tensor. If that is a problem for you, you might try copying…
  3. I’d probably print the converted tensor, too, just to be sure. I have stumbled over OpenCV giving me BGR in C++ while my model expected the RGB I got in Python, and the like.

Best regards

Thomas

P.S.: Nowadays we use torch.tensor in Python instead of the constructors.

Thank you for your help @tom

@tom do you mind explaining what the phrase “without taking ownership of the original data.” means? It’s from the torch::from_blob documentation.

This is what I meant with

using from_blob requires you to keep the pointer array alive for the lifetime of the returned tensor.

So you pass a pointer in to from_blob, but PyTorch “not taking ownership” means that PyTorch doesn’t know where it comes from and won’t try to do anything with it (e.g. free it). This means that you as the caller are responsible for the lifetime of the memory blob:

  • making sure the memory is alive while PyTorch has references to the tensor (e.g. if you have a local std::vector<float> v and you use from_blob on its v.data(), you run into trouble if v goes out of scope while the tensor is still in use). On the other hand, if you copy with .clone() or do a computation, like /255., and don’t use the original anymore, you are OK.
  • deallocating the memory eventually. PyTorch can help you by signalling when it is done with the tensor returned by from_blob if you pass in a deleter.

Best regards

Thomas