Hi! Is it possible to use fancy indexing in PyTorch without it returning a copy of the indexed tensor?
For example, I have a large 3D tensor X of shape [100,100,100] and want to extract multiple (say 10) [50,50,50] slices, giving a resulting tensor X_sliced of shape [10,50,50,50]. Currently I use fancy indexing as X_sliced = X[mask_0+borders_0, mask_1+borders_1, mask_2+borders_2], where the masks and borders are int64 index tensors. However, instead of a view with zero additional memory footprint, this method returns copied slices, significantly increasing memory usage.
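To make the setup concrete, here is a minimal sketch of the indexing described above. The tensor contents, the offset tensor `starts` (playing the role of the `borders_*` tensors), and the index grids `idx0`/`idx1`/`idx2` (playing the role of the `mask_*` tensors) are all hypothetical stand-ins, not my actual data:

```python
import torch

# Hypothetical stand-in for my data: a [100,100,100] tensor,
# from which 10 cubic [50,50,50] slices are extracted.
X = torch.arange(100**3, dtype=torch.float32).reshape(100, 100, 100)

n_slices, s = 10, 50
# Per-slice starting corners, one row per axis (the "borders" tensors).
starts = torch.randint(0, 100 - s, (3, n_slices))  # int64 by default

# Index grids (the "masks") broadcast against the per-slice offsets:
# shapes (10,50,1,1), (10,1,50,1), (10,1,1,50) -> result (10,50,50,50).
r = torch.arange(s)
idx0 = starts[0].view(-1, 1, 1, 1) + r.view(1, -1, 1, 1)
idx1 = starts[1].view(-1, 1, 1, 1) + r.view(1, 1, -1, 1)
idx2 = starts[2].view(-1, 1, 1, 1) + r.view(1, 1, 1, -1)

X_sliced = X[idx0, idx1, idx2]  # advanced indexing -> allocates a copy
print(X_sliced.shape)           # torch.Size([10, 50, 50, 50])
# The result does not share storage with X:
print(X_sliced.data_ptr() == X.data_ptr())  # False
```

The `data_ptr()` check shows the problem: the sliced result lives in freshly allocated memory rather than being a view into X.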
Is there a way to do this kind of indexing without allocating additional memory?