I have a 4D tensor of shape NxCxHxW allocated on GPU memory.
I want to take some of the elements (along the N dimension) and expand each of them n times, without allocating new memory. I can do this for all elements at once as follows:
batch.unsqueeze(0).expand(n, -1, -1, -1, -1)
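To make the question concrete, here is a minimal sketch (with small, made-up tensor sizes) showing that this `expand` call is a view over the same storage rather than a copy:

```python
import torch

batch = torch.randn(4, 3, 8, 8)  # NxCxHxW (example sizes)
n = 5

expanded = batch.unsqueeze(0).expand(n, -1, -1, -1, -1)

# expand() returns a view: same underlying storage, no new allocation
assert expanded.shape == (n, 4, 3, 8, 8)
assert expanded.data_ptr() == batch.data_ptr()
```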
Is there a way to do this for only selected elements of the batch (for instance, only the first element), still without allocating new memory?
An example of how to do this with memory allocation (repeating the first num_elems_to_expand elements n times and appending them to the batch):
torch.cat((batch, batch[:num_elems_to_expand].unsqueeze(0).expand(n, -1, -1, -1, -1).reshape(-1, *batch.shape[1:])), dim=0)
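A runnable version of the allocating workaround, assuming the goal is to repeat the first k elements n times along the N dimension while keeping the rest of the batch once (sizes here are hypothetical):

```python
import torch

batch = torch.randn(4, 3, 8, 8)  # NxCxHxW (example sizes)
n, k = 3, 2                      # repeat the first k elements n times

# expand() itself is free, but reshape() on the non-contiguous view
# and torch.cat() both materialize the data, so this allocates memory
repeated = batch[:k].unsqueeze(0).expand(n, -1, -1, -1, -1)
out = torch.cat((batch, repeated.reshape(-1, *batch.shape[1:])), dim=0)

assert out.shape == (4 + n * k, 3, 8, 8)
assert out.data_ptr() != batch.data_ptr()  # a fresh allocation
```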