Appending to a tensor

Is there a way of appending a tensor to another tensor in pytorch? I can use x = torch.cat((x, out), 0) for example, but it creates a new copy of x which is time-consuming.
Thanks!

6 Likes

it’s not that time-consuming.

2 Likes

torch.cat is super efficient, and basically bandwidth-bound. It’s not time-consuming.

9 Likes

torch.cat is fast when you are doing it once. But if you are preparing data and doing cat in each iteration, it gets really slow when the tensor you are generating gets very large. My solution was to cat into a temp tensor and move it to the real tensor every N iterations. Not sure if there is a more elegant solution to this.
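A minimal sketch of the strategy described above (the buffer size and shapes are arbitrary stand-ins, not from the original post): new rows accumulate in a small Python list, and the expensive cat into the large tensor happens only once every N iterations.

```python
import torch

# Accumulate rows in a buffer; cat into the big tensor every N steps,
# so the large copy happens once per N iterations instead of every step.
N = 100
result = torch.empty(0, 4)   # the growing tensor (4 = feature size, arbitrary)
buffer = []

for step in range(1000):
    row = torch.randn(1, 4)  # stand-in for the data produced each iteration
    buffer.append(row)
    if len(buffer) == N:
        result = torch.cat([result] + buffer, dim=0)
        buffer = []

if buffer:  # flush any remainder
    result = torch.cat([result] + buffer, dim=0)

print(result.shape)  # torch.Size([1000, 4])
```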

1 Like

I need to transform each input tensor in the following manner, for each iteration:

# input_batch shape: (64, in_channels, 224, 224)
outputs = []

for ch in range(in_channels):
    # ch:ch+1 keeps the channel dim: my_fn maps (64, 1, 224, 224) to (64, 32, 224, 224)
    tensor = my_fn(input_batch[:, ch:ch+1, :, :])
    outputs.append(tensor)
result = torch.cat(outputs, dim=1)  # shape: (64, 32*in_channels, 224, 224)

in_channels is typically 3, but can be more.

Is appending to list better than doing torch.cat incrementally inside the loop?
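For concreteness, here is a sketch of the two approaches being compared (the shapes are small stand-ins for the real data): appending to a list and calling cat once at the end, versus calling cat on the accumulator inside the loop, which re-copies everything accumulated so far on each iteration.

```python
import torch

# Small stand-in chunks; the real ones are (64, 32, 224, 224).
chunks = [torch.randn(64, 32, 16, 16) for _ in range(8)]

# Approach 1: append to a list, cat once at the end.
outputs = []
for c in chunks:
    outputs.append(c)
result_a = torch.cat(outputs, dim=1)

# Approach 2: cat incrementally; each step re-copies the accumulator,
# so total work grows quadratically with the number of chunks.
result_b = chunks[0]
for c in chunks[1:]:
    result_b = torch.cat((result_b, c), dim=1)

# Both produce the same tensor; approach 1 just copies less overall.
assert torch.equal(result_a, result_b)
```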

9 Likes

Dear smth, is there any way I can concatenate a list of tensors inside the loop without using stack/cat?

This is an example of my case: I want to manipulate output, but because it is a list of tensors, I want to change it into one tensor. As the loop gets long, memory fills up because of the stack operation.

output = []
start = time.time()
for i in range(2):
    hx = rnn(input[:, i, :], (hx_0))
    output.append(hx)
    outs1 = torch.stack(output, 1)
    hx_0 = operation(outs1)

print(output)

@holiv you can pre-allocate a larger Tensor, and then just copy into a slice.

max_output_size = 10
output = []
output_cat = None

start = time.time()
for i in range(2):
    hx = rnn(input[:, i, :], (hx_0))
    output.append(hx)

    if output_cat is None:
        # allocate the full-size tensor once, based on the first output
        output_cat_size = list(hx.size())
        output_cat_size.insert(1, max_output_size)
        output_cat = torch.empty(*output_cat_size, dtype=hx.dtype, device=hx.device)

    output_cat[:, i] = hx
    hx_0 = operation(output_cat[:, 0:i])

print(output_cat)
5 Likes

Dear @smth, thank you for your reply. However, output_cat does not contain anything. Because my operation has an unsqueeze inside, the error cannot unsqueeze empty tensor is thrown. In addition, I don’t see the connection between output_cat and output.append. Also, if I try to replace output.append with output_cat.append, the error says that ‘NoneType’ object has no attribute ‘append’.
Thank you.
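For reference, a self-contained variant of the pre-allocation idea that avoids the empty-slice error: the slice runs up to i + 1 so it always contains at least one element. The rnn and operation below are stand-ins, not the poster’s real modules.

```python
import torch

# Stand-in modules (assumptions, not the real ones from the thread).
max_output_size = 10
hidden = 8
rnn = torch.nn.Linear(hidden, hidden)     # stand-in for the real RNN cell
operation = lambda t: t.mean(dim=1)       # stand-in for the real operation
inp = torch.randn(4, 2, hidden)           # (batch, seq, features)
hx_0 = torch.zeros(4, hidden)

output_cat = None
for i in range(2):
    hx = rnn(inp[:, i, :] + hx_0)
    if output_cat is None:
        # allocate the full tensor once, based on the first output
        size = [hx.size(0), max_output_size] + list(hx.size()[1:])
        output_cat = torch.zeros(*size, dtype=hx.dtype, device=hx.device)
    output_cat[:, i] = hx
    hx_0 = operation(output_cat[:, : i + 1])  # up to i + 1: never empty

print(output_cat.shape)  # torch.Size([4, 10, 8])
```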

What if we want to concatenate two inputs?

outputs = []

for i in range(in_channels):
    for j in range(in_channels):
        # cat takes a tuple of tensors; i:i+1 keeps the channel dim,
        # so each pair gives (64, 2, 224, 224)
        tensor = torch.cat((input_batch1[:, i:i+1, :, :], input_batch2[:, j:j+1, :, :]), dim=1)
        outputs.append(tensor)
result = torch.cat(outputs, dim=1)

Hi, is there a way to declare output = [] in your code as a torch.tensor and append to it, similar to a Python list or NumPy array?

1 Like

Hi @michaelklachko, I am trying to do something like that, though I am not sure (I am new to this field) and am trying to learn. I wrote the code below and it ran for almost 2 hours (it hung), so I had to force-shutdown my laptop. Is it OK to write something like this?
My test_dataloader contains 14000 batches: I have 28000 test images and a batch size of 2. I have a pre-trained model and am now trying to test it.

final_output = []
for i, data in enumerate(test_data):    # test_data is a DataLoader
    data = data.unsqueeze(1)
    output = model(data)
    final_output.append(output)
result = torch.cat(final_output, dim=1)

Could you tell me what I have to do?

2 Likes

Try removing the output = model(data) line and see how fast it goes.
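If the model call is indeed the slow part, one common cause (an assumption here, not confirmed in the thread) is that the stored outputs retain autograd history, so memory grows with every appended batch. A minimal sketch of the evaluation loop with a stand-in model, run under torch.no_grad() so no graph is kept:

```python
import torch

# Stand-ins for the poster's pre-trained model and DataLoader:
# a tiny linear model and a few random batches of size 2.
model = torch.nn.Linear(28 * 28, 10)
test_data = [torch.randn(2, 28 * 28) for _ in range(5)]

final_output = []
with torch.no_grad():            # no autograd graph, so memory stays flat
    for data in test_data:
        output = model(data)
        final_output.append(output)

result = torch.cat(final_output, dim=0)  # dim=0 stacks along the batch dim
print(result.shape)  # torch.Size([10, 10])
```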

Like this one?

# note that a has dtype float32 (by default),
# so the tensors to be concatenated must
# also have dtype float32.
a = torch.tensor(())
for i in range(3):
    # if i = torch.tensor(1), it cannot be
    # concatenated, since it has zero
    # dimensions. Also use .float() to make
    # sure the dtypes match.
    i = torch.tensor([i]).float()
    a = torch.cat((a, i), 0)
print(a)
tensor([0., 1., 2.])
1 Like