Torch.index_select op that reuses underlying storage?


Would it be possible to perform an operation like torch.index_select without allocating new memory?


Not really with the strided view of tensors.
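To illustrate the difference, here is a small sketch: a slice with a fixed step can be expressed as a view (same storage, different strides), while index_select with arbitrary indices has to copy into new storage. The variable names are just for illustration.

```python
import torch

x = torch.arange(12).reshape(3, 4)

# A strided slice (fixed step between rows) is a view: it shares storage with x.
view = x[::2]  # rows 0 and 2, row stride doubled
assert view.data_ptr() == x.data_ptr()

# index_select with arbitrary indices cannot be expressed via strides,
# so it copies into freshly allocated storage.
sel = torch.index_select(x, 0, torch.tensor([0, 2]))
assert sel.data_ptr() != x.data_ptr()

# Same values, different storage.
assert torch.equal(sel, view)
```

Writing into `view` would modify `x`, while writing into `sel` would not, which is another way to observe the view/copy distinction.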

What exactly do you mean by strided view? Sorry if it's a silly question.

It means that x[..., i, ...] and x[..., i + 1, ...] are a fixed number of bytes apart. Specifically, given an index and a stride for each dimension, the pointer offset of an element is computed as sum_i (stride[i] * index[i]). So a tensor whose values sit at arbitrary, irregular distances in memory cannot be represented this way, and supporting that would also be very inefficient.