Is advanced indexing without data copy possible to implement in C++?

I've heard arguments that it betrays the philosophy of PyTorch, that it's hazardous, etc. My use case is niche but valid: I need to manipulate a tensor in more complex ways than are possible with the Python front end. In particular, I would like to do what index_select does (use a LongTensor of indices to slice a tensor) without copying the data into a new tensor. Is such a thing feasible without delving into the ATen code or elsewhere in the C++ backend?
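For concreteness, here is a minimal sketch (assuming libtorch and the C++ frontend) of the copy I'm trying to avoid versus the view behaviour I'd like:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto t = torch::arange(12).reshape({3, 4});

  // Basic slicing produces a view: same underlying storage, no copy.
  auto view = t.slice(/*dim=*/0, /*start=*/0, /*end=*/2);
  std::cout << (view.data_ptr() == t.data_ptr()) << "\n";  // 1: shares storage

  // index_select with a LongTensor of indices materializes a new tensor.
  auto idx = torch::tensor({0, 2}, torch::kLong);
  auto selected = t.index_select(/*dim=*/0, idx);
  std::cout << (selected.data_ptr() == t.data_ptr()) << "\n";  // 0: data was copied
}
```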

I don’t know what kind of PyTorch philosophy would be betrayed, but the libtorch indexing functions can be found here, listed alongside their Python counterparts.
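For reference, a short sketch of how that indexing API mirrors the Python syntax; the exact calls are only an illustration, assuming a reasonably recent libtorch:

```cpp
#include <torch/torch.h>
using namespace torch::indexing;

int main() {
  auto t = torch::arange(12).reshape({3, 4});

  // Python: t[:, 1:3]  -> basic indexing, returns a view (regular strides)
  auto v = t.index({Slice(), Slice(1, 3)});

  // Python: t[torch.tensor([0, 2])]  -> advanced indexing, returns a copy
  auto c = t.index({torch::tensor({0, 2})});

  // Python: t[:, 1] = 0  -> in-place assignment through the same API
  t.index_put_({Slice(), 1}, 0);
}
```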

Thanks for the heads up. Could I then create an extension that accomplishes what I've set out to do, basing my new indexing function on the ones you pointed to?
I've been looking at the C++ backend and, as one should expect for a library like this, things are complex and interconnected; I wasn't sure where I could get the low-level access to tensors required to do advanced indexing without copying data into a new tensor.
What I want is a customized view of a tensor, but from the little C++ code I have actually read, the very structure of PyTorch tensors, with their strides, looks as if it presupposes creating views by stepping through the tensor at regular intervals. If I'm wrong about that, I'm glad to be. I'll look into the resources you provided.
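To illustrate what I mean, a rough sketch of my understanding of the stride machinery (an assumption on my part, not an authoritative description): as_strided can build a copy-free view whenever the element positions can be written as offset plus index times stride, which an arbitrary index list cannot.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto t = torch::arange(12).reshape({3, 4});  // contiguous, strides {4, 1}

  // Rows 0 and 2 are a regular stepping (every other row), so a view with
  // custom strides can express them without copying.
  auto every_other_row = t.as_strided(/*size=*/{2, 4}, /*stride=*/{8, 1});
  std::cout << (every_other_row.data_ptr() == t.data_ptr()) << "\n";  // 1: shares storage

  // Rows {0, 1, 3} have no constant step, so no (offset, strides) pair can
  // describe them; index_select has to gather the data into a new tensor.
  auto gathered = t.index_select(0, torch::tensor({0, 1, 3}, torch::kLong));
  std::cout << (gathered.data_ptr() == t.data_ptr()) << "\n";  // 0: data was copied
}
```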
Thanks.