How do I concatenate 1.2 million 2-dimensional tensors into a single tensor?

So, I have a list of 1.2 million tensors that I'd like to concatenate into a single tensor. When I call torch.cat on the list of tensors, I get an out-of-memory error. Each tensor has a shape of [n_channels, 1], e.g. torch.Size([832, 1]).

Traceback (most recent call last):
  File "cat_samples.py", line 10, in <module>
    torch.save(torch.cat(samples, 1), "1m_samples.pt")
RuntimeError: CUDA out of memory. Tried to allocate 3.97 GiB (GPU 0; 8.00 GiB total capacity; 4.28 GiB already allocated; 2.44 GiB free; 4.28 GiB reserved in total by PyTorch)

How can I concatenate my list of 1.2 million 2-dimensional tensors into a single tensor?

So, it looks like I can avoid the out-of-memory error by moving the tensors to the CPU before calling torch.cat. The concatenated result then lives in system RAM, which is typically much larger than the 8 GiB of GPU memory in the traceback above, and torch.save works the same either way.
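A minimal sketch of that approach, assuming the tensors start on the GPU (here I just use a small list of random CPU tensors with the same [832, 1] shape as a stand-in, since 1.2 million real tensors won't fit in a snippet):

```python
import torch

# Stand-in for the real list of 1.2 million tensors; each has shape [832, 1].
samples = [torch.randn(832, 1) for _ in range(10)]

# Move each tensor to the CPU first (a no-op if it is already there), so that
# torch.cat allocates the big result tensor in system RAM, not GPU memory.
cpu_samples = [t.cpu() for t in samples]

# Concatenate along dim 1, matching the original torch.cat(samples, 1) call.
result = torch.cat(cpu_samples, dim=1)  # shape: [832, len(samples)]

torch.save(result, "1m_samples.pt")
```

With the full 1.2 million tensors the result would be [832, 1200000], roughly 4 GB at float32, so it fits in RAM on most machines even though it overflowed the GPU here.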