I had some issues creating tensors with a long type directly, but I’m sure that is more down to my not knowing what I’m doing in C++ than to it not working. However, I did something like the following recently for this.
std::vector<int> v{1, 2, 3};
auto opts = torch::TensorOptions().dtype(torch::kInt32);
torch::Tensor t = torch::from_blob(v.data(), {3}, opts).to(torch::kInt64);
Thanks for the reply. It worked. I think my problem was setting the type directly to torch::kInt64 instead of setting it to torch::kInt32 and then converting to torch::kInt64.
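In case it helps: setting torch::kInt64 directly does work when the vector itself holds 64-bit elements, so that the dtype matches the underlying buffer. A minimal sketch:
std::vector<std::int64_t> v{1, 2, 3};
auto opts = torch::TensorOptions().dtype(torch::kInt64);
// dtype matches the vector's element type, so no .to() conversion is needed
torch::Tensor t = torch::from_blob(v.data(), {3}, opts);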
I ran into a similar issue and figured out that the problem is due to torch::from_blob not taking ownership of the vector from which the tensor is created. The solution is to follow torch::from_blob with clone().
For example, in the OP’s question, if inputs is created from a vector vec in a certain scope but used after vec has gone out of scope, then inputs is likely to contain garbage values.
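As a minimal sketch of that fix (make_inputs here is a hypothetical stand-in for however the OP builds inputs):
torch::Tensor make_inputs()
{
    // vec lives only inside this function
    std::vector<float> vec{1.f, 2.f, 3.f};
    // from_blob alone would leave the tensor pointing into vec's buffer,
    // which dies with vec; clone() copies the data into tensor-owned storage
    return torch::from_blob(vec.data(), {3}, torch::kFloat32).clone();
}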
So there is a way to do this without copying the vector’s data, by using move semantics and placement new. The overall idea is that we can std::move from the vector (or another object that stores its data in a contiguous buffer), placement-new it into a managed buffer, and then use torch::from_blob with a custom deleter that cleans up the memory when the tensor is deleted.
For simplicity I show the C++17 approach; I’ve used this before with success.
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

#include <torch/torch.h>

// traits classes for PyTorch tensor type constants
template <typename T>
struct tensor_type_traits {};

// for 32-bit signed int
template <>
struct tensor_type_traits<int> {
    static constexpr auto typenum = torch::kInt32;
};

// ... e.g. for double (kFloat64), float (kFloat32), etc.

/**
 * Return a 1D PyTorch tensor from an STL vector.
 *
 * @tparam T Element type
 * @tparam A Allocator type
 *
 * @param vec Vector to consume
 * @param options Tensor creation options
 */
template <typename T, typename A>
torch::Tensor make_tensor(
    std::vector<T, A>&& vec, torch::TensorOptions options = {})
{
    using V = std::vector<T, A>;
    // allocate storage for placement new (the unique_ptr also prevents a
    // leak if an exception is thrown before ownership is handed off)
    auto buf = std::make_unique<unsigned char[]>(sizeof(V));
    // placement new + get pointer to the moved-into vector
    auto vptr = new (buf.get()) V{std::move(vec)};
    // create PyTorch 1D tensor
    auto ten = torch::from_blob(
        vptr->data(),
        // size() returns std::size_t, so cast to avoid a narrowing error
        {static_cast<std::int64_t>(vptr->size())},
        // note: the void* argument is unused since we delete through vptr
        [vptr](void*)
        {
            // take ownership of the buffer for deletion on scope exit
            std::unique_ptr<unsigned char[]> vbuf{
                reinterpret_cast<unsigned char*>(vptr)};
            vptr->~V();
        },
        // data type determined via traits class specializations
        options.dtype(tensor_type_traits<T>::typenum)
    );
    // we only release the buffer now in case from_blob throws
    buf.release();
    return ten;
}
We can use the make_tensor() function template as follows:
// some STL vector (std::pmr::vector needs <memory_resource>, and a
// tensor_type_traits<double> specialization as mentioned above)
std::pmr::vector<double> vec{4., 2.322, 2.432, 6.34, 5.343};
// create 1D tensor with gradient requirement by consuming the vector
auto ten = make_tensor(std::move(vec), torch::requires_grad());
// ...
You do need to be careful not to resize ten, because the Tensor does not actually know anything about its underlying storage, but if you just need a tensor as an input so you can call backward() and grad(), this is sufficient. Of course, this can be extended if necessary.
I’ve made similar overloads for Eigen3 matrices; since they can be row- or column-major (defaulting to the latter), a bit of if constexpr is required to determine the strides. A rough sketch follows.
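This is what such an overload might look like for a dense Eigen::Matrix, reusing the tensor_type_traits machinery and the same placement-new ownership transfer as above. It is illustrative rather than the exact code I used, and it assumes tensor_type_traits<T> is specialized for the matrix’s scalar type.
// same includes as make_tensor() above, plus the Eigen core header
#include <Eigen/Core>

template <typename T, int R, int C, int O, int MR, int MC>
torch::Tensor make_tensor(
    Eigen::Matrix<T, R, C, O, MR, MC>&& mat, torch::TensorOptions options = {})
{
    using M = Eigen::Matrix<T, R, C, O, MR, MC>;
    // record the shape before moving from mat
    auto rows = static_cast<std::int64_t>(mat.rows());
    auto cols = static_cast<std::int64_t>(mat.cols());
    // element strides depend on the matrix's storage order
    std::vector<std::int64_t> strides;
    if constexpr (M::IsRowMajor)
        strides = {cols, 1};
    else
        strides = {1, rows};  // column-major is the Eigen default
    // same placement-new ownership transfer as in make_tensor() above
    auto buf = std::make_unique<unsigned char[]>(sizeof(M));
    auto mptr = new (buf.get()) M(std::move(mat));
    auto ten = torch::from_blob(
        mptr->data(),
        {rows, cols},
        strides,
        [mptr](void*)
        {
            std::unique_ptr<unsigned char[]> mbuf{
                reinterpret_cast<unsigned char*>(mptr)};
            mptr->~M();
        },
        options.dtype(tensor_type_traits<T>::typenum)
    );
    buf.release();
    return ten;
}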