How to copy a Python list of int into an existing pytorch tensor storage?

We know that torch.tensor([1, 2, 3, 4, 5]) will create a new tensor. If I have an existing buffer, can I make this function call directly move data there?

e.g. what I want to achieve:

buffer = torch.empty((300,), dtype=torch.int64)
data = torch.tensor([1, 2, 3, 4, 5])
buffer[5:10] = data

However, buffer[5:10] = data involves an additional copy. I want to optimize that.
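As a quick sanity check (a sketch using the same names as above), the slice assignment copies the values into buffer's own storage, so later changes to data don't show up there:

```python
import torch

buffer = torch.empty((300,), dtype=torch.int64)
data = torch.tensor([1, 2, 3, 4, 5])
buffer[5:10] = data                          # copies data's values into buffer's storage

data[0] = 999                                # mutate the source after the assignment
print(buffer[5].item())                      # still 1 -- buffer holds its own copy
print(buffer.data_ptr() == data.data_ptr())  # False -- separate storage
```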

Another solution is to write a for-loop:

for i in range(5, 10):
    buffer[i] = data[i - 5]

However, I’m afraid that the Python for-loop overhead will be more than the additional copy.
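That worry can be tested directly with a rough micro-benchmark sketch (absolute numbers vary by machine, so this is only indicative):

```python
import time
import torch

buffer = torch.empty((300,), dtype=torch.int64)
data_as_list = [1, 2, 3, 4, 5]
N = 1000  # repetitions

t0 = time.perf_counter()
for _ in range(N):
    buffer[5:10] = torch.tensor(data_as_list)   # copy list -> tensor -> buffer
t_assign = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    for i in range(5, 10):
        buffer[i] = data_as_list[i - 5]         # one Python-level indexing call per element
t_loop = time.perf_counter() - t0

print(f"tensor+assign: {t_assign:.4f}s  loop: {t_loop:.4f}s")
```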

Hi Youkai!

I assume that your concern is the copy involved in:

data_as_list = [1, 2, 3, 4, 5]       # a python list
data = torch.tensor (data_as_list)   # copy the list data into a (new) tensor

That is, you would like to achieve something like buffer[5:10] = data_as_list
without any additional copies. I don’t believe that pytorch gives you any way to
do this. This is because the int64s themselves are not stored contiguously in the
python list, so pytorch can’t simply copy the data_as_list memory into the buffer.
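One way to see the contiguity point (a sketch; torch.frombuffer requires a recent pytorch, 1.10 or later): a python list holds pointers to boxed int objects, not raw int64s, so it doesn’t support the buffer protocol, while an array.array does and can be wrapped zero-copy:

```python
import array
import torch

data_as_list = [1, 2, 3, 4, 5]

# A list is not a contiguous block of raw int64s, so it can't be
# wrapped without a copy:
try:
    torch.frombuffer(data_as_list, dtype=torch.int64)
    raised = False
except Exception:
    raised = True
print("list can be wrapped zero-copy:", not raised)

# An array.array *is* a contiguous block of raw values, so frombuffer
# can create a tensor view over it without copying:
data_as_array = array.array('q', data_as_list)  # one copy: list -> raw int64s
view = torch.frombuffer(data_as_array, dtype=torch.int64)
view[0] = 42
print(data_as_array[0])  # 42 -- the tensor shares the array's memory
```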

If data is already packaged as a pytorch tensor, then buffer[5:10] = data is
probably the most efficient way to go.

(If data is a python list rather than a pytorch tensor, you probably want to create a
new pytorch tensor and then assign it into buffer, because the for-loop overhead
will almost certainly be worse than creating the additional tensor.)


K. Frank

Thanks for the reply. I know at least one copy is required. However, the following code

buffer = torch.empty((300,), dtype=torch.int64)
data = torch.tensor([1, 2, 3, 4, 5])
buffer[5:10] = data

involves 2 copies: one for creating the tensor from the python list, and the other for assigning it into the buffer.

I’m trying to ask whether I can save the second copy, the one in buffer[5:10] = data.

Hi Youkai!


And yes, at least one copy is required when starting from a python list.

But can you save the second copy? Not that I am aware of.

You would want something (that doesn’t exist) like:

buffer = torch.empty((300,), dtype=torch.int64)
data_as_list = [1, 2, 3, 4, 5]
torch.create_tensor_into_specified_memory (data_as_list, specified_memory = buffer[5:10])

Some frameworks / apis do offer the ability to create a new object into a specific
memory location, but I’m not aware of anything like this in pytorch. So two separate
copy operations are needed.
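The two-copy path can at least be made explicit with Tensor.copy_(), which writes in place through the buffer[5:10] view rather than creating any further intermediate (a sketch; it does not reduce the copy count below two):

```python
import torch

buffer = torch.empty((300,), dtype=torch.int64)
data_as_list = [1, 2, 3, 4, 5]

# copy 1: list -> new tensor (unavoidable: the list isn't contiguous int64s)
tmp = torch.tensor(data_as_list, dtype=torch.int64)

# copy 2: new tensor -> buffer's storage; buffer[5:10] is a view, and
# copy_() writes through it directly into buffer's memory
buffer[5:10].copy_(tmp)
print(buffer[5:10].tolist())  # [1, 2, 3, 4, 5]
```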


K. Frank