Efficient memory management: how to store a tensor of shape [1000, 1 billion] without crashing my workstation

I'm dealing with pairwise combinations of columns, so the number of resulting columns easily shoots up to around 1 billion, and I'm not sure how to handle this tensor efficiently without crashing my workstation.
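For context, the pair count grows quadratically in the number of input columns, so even a moderate column count produces this explosion. A quick sketch (the 45,000 column count is a hypothetical figure chosen to land near 1 billion pairs, not taken from the original post):

```python
from math import comb

n_cols = 45_000            # hypothetical number of input columns
n_pairs = comb(n_cols, 2)  # n * (n - 1) / 2 unordered pairs
print(n_pairs)             # -> 1012477500, i.e. ~1 billion pairs
```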

from itertools import combinations
import torch

# All unordered column pairs, as Python tuples and as an index tensor
combinations1 = list(combinations(self.columns, 2))
combinations2 = torch.combinations(torch.arange(self.df_feature_values.shape[1]), 2)
# Gather the values for every pair -> shape [n_pairs, 2, n_rows]
comb_tensor = self.df_feature_values.T[combinations2, :]
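A back-of-envelope estimate (assuming float32 and the [1000, 1 billion] shape stated above) shows why materializing this tensor cannot fit in workstation RAM:

```python
n_rows = 1_000               # rows per pair, from the stated shape
n_pairs = 1_000_000_000      # ~1 billion combination columns
bytes_per_float32 = 4        # float32 is 4 bytes per element

total_bytes = n_rows * n_pairs * bytes_per_float32
print(f"{total_bytes / 1e12:.1f} TB")  # -> 4.0 TB of memory required
```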

NOTE: I don't want to use chunking, which would add overhead to the execution time, and I don't want to reduce the dtype from float32 to float16.

I know these are a lot of constraints, but if there's any way to handle this problem, help is much appreciated.