I use torch.expand
on a big tensor:
tensor = tensor.unsqueeze(1).expand(-1, 1000, -1)
and then I would normally reshape it so that the first two dims collapse into one batch dim, e.g.,
new_tensor = tensor.view(tensor.shape[0] * 1000, -1)
but I can't do this because Tensor.view
requires contiguous memory, and the tensor torch.expand
returns is non-contiguous.
Using torch.reshape
instead throws a CUDA out-of-memory error, because it materializes a contiguous copy. Is there any way I can batch-process tensor
without creating a memory-expensive copy?
My goal is to process new_tensor
in a module with batch processing.
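For reference, here is a minimal sketch that reproduces the situation (the shapes are made up for illustration; the real tensor is much larger):

```python
import torch

# Small stand-in shapes; in practice the tensor is big enough that
# a contiguous copy of the expanded version does not fit on the GPU.
tensor = torch.randn(4, 8)                            # (batch, features)
expanded = tensor.unsqueeze(1).expand(-1, 1000, -1)   # (batch, 1000, features), no copy

# expand returns a stride-0 view, so it is not contiguous:
assert not expanded.is_contiguous()

# view cannot merge the stride-0 dim with the batch dim:
try:
    expanded.view(expanded.shape[0] * 1000, -1)
except RuntimeError as e:
    print("view failed:", e)

# reshape works, but only by materializing a contiguous copy,
# which is what triggers the OOM on the real data:
flat = expanded.reshape(expanded.shape[0] * 1000, -1)
print(flat.shape)
```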