# Concat two template tensors multiple times in a specific order

I want to concatenate two same-size tensors, say `A` and `B`, multiple times. The desired output would be something like
`C = [A, B, B, A, A, A, B]`, etc.

• the order is specified by a condition
• the resulting tensor will not be attached to the autograd graph

The following code works:

```python
C = torch.stack([A if condition else B for condition in conditions])
```

Is there any other option to achieve this? Specifically, I like the idea of creating a "view" of the resulting tensor given `A`, `B`, and `conditions`.
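One alternative sketch (using example shapes, not from the original post): stack the two templates once and select rows by integer indexing, which replaces the Python-level loop with a single indexing op. Note that advanced indexing copies, so this is still not a true view.

```python
import torch

A = torch.randn(3, 2)
B = torch.randn(3, 2)
conditions = torch.tensor([True, False, False, True])

# Stack the two templates once, then pick entries by index:
# conditions.long() maps True -> 1 (A) and False -> 0 (B).
base = torch.stack([B, A])         # size (2, 3, 2)
C = base[conditions.long()]        # size (4, 3, 2), same result as the comprehension

# Caveat: advanced indexing copies, so C is NOT a view of A or B.
```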

Using indices or a mask should also work, but you wouldn't get a view, and I don't know whether any approach can produce a true view in this case:

```python
A = torch.randn(4)
B = torch.randn(3)
C = torch.empty(len(A) + len(B))

mask = torch.tensor([True, False, False, True, True, True, False])

# Scatter A into the True positions and B into the False ones.
C[mask] = A
C[~mask] = B
```

This is interesting! Thank you for the suggestion!
So in both of these approaches we fill a new tensor `C` with values from `A` and `B`.

In my use case, I need `C` for a later matrix multiplication, so I'd like to use the additional dim resulting from `stack` (you can think of it as a batch dimension).

```python
A = torch.randn(3, 2)
B = torch.randn(3, 2)
conditions = torch.tensor([True, False, False, True])  # size (4,)

C = torch.stack([A if c.item() else B for c in conditions], dim=0)  # size (4, 3, 2)

D = torch.randn(4, 2, 5)

E = torch.matmul(C, D)  # size (4, 3, 5)
```

Is there a way to create an ephemeral `C` that reuses the storage of `A` and `B`? It would be great to avoid allocating the whole `C` tensor.
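If only the final product `E` is needed, one possible sketch (under the shapes above, not a true view) is to skip `C` entirely: matmul broadcasting multiplies each template against the whole batch once, and `torch.where` selects the per-batch result. This avoids the `(4, 3, 2)` allocation at the cost of computing both products for every batch element.

```python
import torch

A = torch.randn(3, 2)
B = torch.randn(3, 2)
conditions = torch.tensor([True, False, False, True])
D = torch.randn(4, 2, 5)

# Broadcast each template over the batch: (3, 2) @ (4, 2, 5) -> (4, 3, 5).
EA = A @ D
EB = B @ D

# Select per batch element; conditions broadcasts from (4, 1, 1) to (4, 3, 5).
E = torch.where(conditions.view(-1, 1, 1), EA, EB)  # size (4, 3, 5)
```

Whether this wins depends on the condition mix: it does roughly twice the FLOPs of the stacked matmul, but never materializes `C`.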