torch.Tensor.narrow

Hi,

I couldn’t understand from the documentation what the narrow method does on a tensor, the example doesn’t make sense to me :frowning_face:

More specifically, I’m trying to understand what’s going on in this code (from https://github.com/pytorch/examples/tree/master/word_language_model):

def batchify(data, bsz):
    # Work out how cleanly we can divide the dataset into bsz parts.
    nbatch = data.size(0) // bsz
    # Trim off any extra elements that wouldn't cleanly fit (remainders).
    data = data.narrow(0, 0, nbatch * bsz)
    # Evenly divide the data across the bsz batches.
    data = data.view(bsz, -1).t().contiguous()
    return data.to(device)

Thanks!


Hi,

If you’re more familiar with advanced indexing: for a 2D tensor t2d, t2d.narrow(1, 0, 10) is the same as t2d[:, 0:10], and t2d.narrow(1, 5, 2) is the same as t2d[:, 5:7].
narrow is interesting because for higher-dimensional tensors you don’t have to write `:` for every other dimension. Also, narrow() (like select()) always returns a view of the original tensor: it doesn’t use any extra memory, is very fast, and any modification of the narrowed tensor will affect the original one.
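Here’s a small sketch you can run to see both points (the equivalence with slicing, and the shared-memory behavior); the tensor values are just made up for illustration:

```python
import torch

# A small 2D tensor to demonstrate narrow vs. slicing.
t2d = torch.arange(20).reshape(4, 5)

# narrow(dim, start, length): take `length` entries along `dim`,
# starting at index `start`.
a = t2d.narrow(1, 0, 3)   # same as t2d[:, 0:3]
b = t2d.narrow(1, 2, 2)   # same as t2d[:, 2:4]
assert torch.equal(a, t2d[:, 0:3])
assert torch.equal(b, t2d[:, 2:4])

# narrow returns a view: it shares storage with the original tensor,
# so modifying the narrowed tensor modifies the original one too.
a[0, 0] = 100
assert t2d[0, 0].item() == 100
```

In batchify above, data.narrow(0, 0, nbatch * bsz) is doing exactly this along dimension 0: it keeps the first nbatch * bsz elements and drops the remainder, without copying the data.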
