How to append to a very large tensor without crashing CPU/GPU memory

Hello, I am trying to build a very large tensor of shape (2000000, 128, 768) by appending to it inside a for loop and then saving it to disk. I have tried appending to a list, using torch.cat, and preallocating an empty tensor and filling it with slicing, but all of them make my code crash. I am wondering if there is any other way to achieve this.
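
For reference, if I am computing it correctly, the full tensor needs roughly this much memory in float32, which is probably why every approach that keeps it all in RAM fails:

# (2000000, 128, 768) tensor, 4 bytes per float32 element
2_000_000 * 128 * 768 * 4 / 1e9   # ≈ 786 GB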

This is my current code:
(1) use append:

feat = []
with torch.no_grad():
    for test_input, test_label in test_dataloader:
        test_label = test_label.to(device)
        mask = test_input['attention_mask'].to(device)
        input_id = test_input['input_ids'].squeeze(1).to(device)

        output, feat_test = model(input_id, mask)

        feat.append(feat_test.cpu())

        del feat_test

torch.save(torch.cat(feat, dim=0), 'feature.pt')

(2) use torch.cat:

feat = None
with torch.no_grad():
    for test_input, test_label in test_dataloader:
        test_label = test_label.to(device)
        mask = test_input['attention_mask'].to(device)
        input_id = test_input['input_ids'].squeeze(1).to(device)

        output, feat_test = model(input_id, mask)

        if feat is None:
            feat = feat_test.cpu()
        else:
            # torch.cat cannot safely write into an out= tensor that is also
            # one of its inputs, so the result has to go into a new tensor
            feat = torch.cat((feat, feat_test.cpu()), dim=0)

        del feat_test

torch.save(feat, 'feature.pt')

(3) preallocate a tensor and fill it with slicing:

feat = torch.zeros(2000000, 128, 768)   # this preallocation alone is ~786 GB in float32
batch_num = 0
with torch.no_grad():
    for test_input, test_label in test_dataloader:
        test_label = test_label.to(device)
        mask = test_input['attention_mask'].to(device)
        input_id = test_input['input_ids'].squeeze(1).to(device)

        output, feat_test = model(input_id, mask)

        # use the actual batch size so a smaller final batch does not break the slice
        bs = feat_test.size(0)
        feat[batch_num : batch_num + bs, :, :] = feat_test.cpu()

        del feat_test

        batch_num = batch_num + bs

torch.save(feat, 'feature.pt')
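
The only other idea I have had, which I have not been able to verify, is to stop holding the whole tensor in RAM and instead write each batch directly into a memory-mapped array on disk with numpy.memmap, roughly like the sketch below (the file name 'features.dat' is just a placeholder, and I am assuming the features are float32):

import numpy as np
import torch

# file-backed array: the data lives in a file on disk rather than in RAM
feat_mm = np.memmap('features.dat', dtype='float32', mode='w+',
                    shape=(2000000, 128, 768))
offset = 0
with torch.no_grad():
    for test_input, test_label in test_dataloader:
        mask = test_input['attention_mask'].to(device)
        input_id = test_input['input_ids'].squeeze(1).to(device)

        output, feat_test = model(input_id, mask)
        batch = feat_test.cpu().numpy()

        # copy this batch straight into the on-disk array
        feat_mm[offset : offset + batch.shape[0]] = batch
        offset += batch.shape[0]

        del feat_test

feat_mm.flush()   # make sure all pages are written out to disk

Would something like this be a reasonable direction, or is there a better way to do it?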

Any help would be appreciated.

I still have this problem. Is there any way to solve it? Thanks.