Is it possible to treat the first 2 dims as batch dims without changing view or reshaping?

I use torch.expand on a big tensor:

tensor = tensor.unsqueeze(1).expand(-1, 1000, -1)

and then I would normally change the view so that the first 2 dim becomes one batch dim, e.g.,

new_tensor = tensor.view(tensor.shape[0] * 1000, -1),

but I can’t do this because torch.expand returns a non-contiguous view (the expanded dimension has stride 0), and .view only works when the strides are compatible with the requested shape.

However, using torch.reshape instead throws a CUDA out-of-memory error, because reshape has to materialize a full contiguous copy of the expanded tensor. Is there any way I can batch-process the tensor without creating such a memory-expensive copy?
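A minimal sketch of the failure, using a small stand-in tensor (the shapes here are illustrative, not the real sizes):

```python
import torch

x = torch.randn(8, 16)                    # stand-in for the big tensor
t = x.unsqueeze(1).expand(-1, 1000, -1)   # no copy: stride 0 along dim 1
print(t.is_contiguous())                  # False

try:
    t.view(t.shape[0] * 1000, -1)         # fails: strides incompatible with view
except RuntimeError as e:
    print("view failed:", e)

flat = t.reshape(t.shape[0] * 1000, -1)   # works, but materializes a full copy
print(flat.shape)                         # torch.Size([8000, 16])
```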

My goal is to process new_tensor in a module with batch processing.

It depends on the used module whether contiguous tensors are needed, and depending on the used operation you could check if a slower but more memory-efficient alternative could be used.
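Two such alternatives, sketched with a hypothetical `nn.Linear` standing in for your module (your actual module may differ): many modules already broadcast over extra leading dimensions, so no flattening is needed at all; and if a 2-D batch really is required, you can process the tensor in chunks so only one small contiguous copy exists at a time.

```python
import torch

module = torch.nn.Linear(16, 4)           # hypothetical stand-in module
x = torch.randn(8, 16)
t = x.unsqueeze(1).expand(-1, 1000, -1)   # still no copy

# Option 1: nn.Linear accepts input of shape (*, in_features), so it
# operates on the expanded tensor directly, without any flattening:
out = module(t)                           # shape (8, 1000, 4)

# Option 2: if the module needs a flat 2-D batch, split along dim 0 and
# reshape per chunk, so only one chunk is materialized at a time:
outs = [module(chunk.reshape(-1, 16)) for chunk in t.split(2, dim=0)]
out2 = torch.cat(outs, dim=0).view(8, 1000, 4)
```

Both options produce the same result; option 2 trades speed (a Python loop over chunks) for peak memory.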