Creating index tensor inside model

I’m using advanced indexing in the forward pass, where an indexing tensor needs to be created:

torch.arange(batch_size).view(-1,1)

By default it seems to be created on the CPU, but it doesn’t cause any error. If I specify the device on which it is created, will it be faster? If so, how can I create it on the current device (the same one as the model)? Or is there a more efficient way to do this?
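
For reference, this is roughly what I had in mind by “specifying the device” (make_row_index, scores and cols are just placeholder names, not my actual code), though I’m not sure it’s the right or fastest approach:

import torch

def make_row_index(x):
    # build the row-index tensor on the same device as the input x,
    # instead of letting torch.arange default to the CPU
    batch_size = x.size(0)
    return torch.arange(batch_size, device=x.device).view(-1, 1)

# usage sketch: pick one column per row from a (batch, num_items) tensor
scores = torch.randn(4, 10)           # stand-in for some intermediate output
cols = torch.tensor([2, 5, 0, 7])     # per-row column indices
rows = make_row_index(scores)         # shape (4, 1), same device as scores
picked = scores[rows, cols.view(-1, 1)]  # advanced indexing, shape (4, 1)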