Memory issue when concatenating with torch.cat()

In the code below I am converting the input data into embeddings using a pre-trained BERT model. Since I only need the pooler output, a vector of length 768, I loop over the data and concatenate each result into the passage_vectors tensor. But this causes the memory to run out after 15-20 data points. I tried running the program after commenting out the torch.cat() statement, and it worked completely fine.

passages = []
passage_vectors = torch.tensor([])
for i in range(len(dicts)):
    preprocessed = preprocessor.process(dicts[i])
    for j in range(len(preprocessed)):
        passage = preprocessed[j]['text']
        passages.append(passage)
        ctx_tokens = context_tokenizer(passage, return_tensors="pt", padding='max_length', truncation=True)
        # pooler output has shape (1, 768)
        ctx_vector = context_model(input_ids=ctx_tokens.input_ids, attention_mask=ctx_tokens.attention_mask)['pooler_output']
        #passage_vectors = torch.cat((passage_vectors, ctx_vector), 0)

There are around 2000 input data points, so the final output tensor would have dimensions 2000x768, which shouldn't take up that much memory. I am running this in Google Colab, which has 12 GB of RAM. Is there something I can do to make it more memory efficient?

Hi,

If you do this on tensors that require gradients, the autograd state can take up quite a bit of memory, because all the intermediate results may be kept alive (memory usage goes from linear to quadratic in the number of iterations).
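As a rough sketch of what I mean (reusing context_model and ctx_tokens from your snippet), you can either run the forward pass under torch.no_grad() so no graph is built, or detach the output afterwards:

import torch

# Option 1: no graph is built at all, so intermediate activations are freed
# as soon as the forward pass finishes.
with torch.no_grad():
    ctx_vector = context_model(input_ids=ctx_tokens.input_ids,
                               attention_mask=ctx_tokens.attention_mask)['pooler_output']

# Option 2: detach a tensor that was produced with grad enabled; the detached
# view no longer references the autograd graph.
ctx_vector = ctx_vector.detach()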

Also, why do you concatenate inside the loop? Why not just append to a list inside the loop and concatenate once at the very end?
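Something like this sketch (context_tokenizer and context_model come from your code; texts is just a stand-in for your preprocessed passages):

import torch

vectors = []                      # plain Python list, cheap to append to
for passage in texts:
    tokens = context_tokenizer(passage, return_tensors="pt",
                               padding='max_length', truncation=True)
    with torch.no_grad():         # see the point about gradients above
        out = context_model(input_ids=tokens.input_ids,
                            attention_mask=tokens.attention_mask)
    vectors.append(out['pooler_output'])

passage_vectors = torch.cat(vectors, dim=0)   # single concatenation, shape (N, 768)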

Yes, I was able to solve it by using detach() on ctx_vector.
I did try appending to a list at first; concatenating inside the loop was another approach I tried to check whether it would solve the memory issue.
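For reference, the change amounts to something like this (same variable names as in my original snippet):

ctx_vector = context_model(input_ids=ctx_tokens.input_ids,
                           attention_mask=ctx_tokens.attention_mask)['pooler_output'].detach()
passage_vectors = torch.cat((passage_vectors, ctx_vector), 0)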