Is it safe to get raw tensor data by tensor.flatten().data_ptr<T>()?

I want to get the raw data of a tensor and return an array. Currently I’m doing it like this:

...
torch::Tensor flatten_tensor = input.to(torch::kCPU).flatten();
tensorSizes = input.sizes();
int64_t tensor_size = flatten_tensor.numel();
*arrayProto.mutable_float_val() = {flatten_tensor.data_ptr<float>(),
                                   flatten_tensor.data_ptr<float>() + tensor_size};
...
torch::Tensor output = torch::empty(torch::IntArrayRef(tensorSizes), tensorOptions);
memcpy(output.data_ptr<float>(), arrayProto.float_val().data(), tensor_size * sizeof(float));
...

I’m not sure:

  1. Is the tensor returned by input.to(torch::kCPU) contiguous in memory? (input may be in CUDA or CPU memory.)
  2. If it is contiguous, will flatten_tensor return the right values of the tensor?
  3. Will output hold the same values as input? Is there any risk in my code?

I tried this with some simple models and tensors and it works, but I’m worried it won’t work in every scenario.

Thank you very much!

As you said, the problem with this approach is the possible non-contiguity of the tensor’s representation in memory. You can test for it with the is_contiguous() member function. If your tensor is non-contiguous, you can use the strides() member function to get, for each dimension, the number of elements to “jump” in memory between consecutive entries along that dimension.

The safest way to construct a new tensor from the data_ptr of another would, in my opinion, be:

auto new_tensor = torch::from_blob(input.data_ptr(), input.sizes(), input.strides(), torch::TensorOptions(torch::kFloat));