I need to copy a row of one tensor into some part of another tensor, for which the begin and end indexes are available. In C++ we can use something like
int myints[] = {10, 20, 30, 40, 50, 60, 70};
std::vector<int> myvector(18);
std::copy(myints, myints + 3, myvector.begin() + 4);
This copies three values from myints into myvector, starting at the fourth index.
I was wondering if there is a similar API in LibTorch (the PyTorch C++ frontend).
For CPU tensors, I think that using the data<scalar_t>() method (for contiguous tensors; otherwise you should use the accessor<scalar_t, dim> API) along with std::memcpy can give near-optimal results.
I do not want to move the data out of torch tensors. Basically, I am looking for torch APIs to perform this operation directly using the tensors.
I’m not sure what you mean by ‘move data out of tensors’. You won’t be moving that data out of torch tensors, the data and accessor APIs just allow you to access the data from tensors.
my_target.select(0, 1).copy_(my_source.slice(1, 0, 10)) would be the equivalent of
my_target[1] = my_source[:, :10] in Python (it works if the shapes of these sub-tensors are compatible).
Typical methods for selecting parts of a tensor are select, slice, and
narrow. You can chain those.