Efficient way of saving list of tensors with variable sizes

What is the most memory- and loading-efficient way to save a list of tensors of variable size (e.g., sentences of variable length)?

For example, I have a list of ~60k tensors. The total memory of all the tensors is 17M, but when I save the list into a *.pt file, it occupies 31M (whereas when the tensors are concatenated and saved as one tensor, it only takes 17M). Is there a way to save the list more efficiently and also enable efficient loading during training?
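For reference, a minimal sketch of the two save paths (the random token-ID setup and file names are just illustrative, assuming 1-D integer tensors):

```python
import torch

# Illustrative setup: many 1-D tensors of variable length (e.g., token IDs).
tensors = [torch.randint(0, 30000, (torch.randint(5, 50, ()).item(),))
           for _ in range(60000)]

# Saving the Python list serializes each tensor's storage and metadata
# separately, which is presumably where the extra file size comes from.
torch.save(tensors, "as_list.pt")

# Alternative: concatenate into one flat tensor and record the lengths,
# so the individual tensors can be recovered after loading.
flat = torch.cat(tensors)
lengths = [len(t) for t in tensors]
torch.save({"flat": flat, "lengths": lengths}, "as_flat.pt")

# To restore the list after loading:
blob = torch.load("as_flat.pt")
restored = list(torch.split(blob["flat"], blob["lengths"]))
```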

Thank you!

Good question. I don’t know for sure, but here’s an idea (which might be wrong). I’d guess the jump from 17M to 31M is due to all the padding needed to make it “1” tensor. If so, maybe you can sort the tensors by length and create batches of similar lengths? This would add minimal padding per batch. The idea is that you have T_1 + … + T_60k = 17M, but as one tensor you get 31M. If you batch the right way, you’ll still get B_1 + … + B_k ≈ 17M with |B_l| >> 1, i.e., your batches are much larger than a single tensor. Maybe there’s a better way, but I’m unsure; I’d experiment. A rough sketch of the bucketing idea is below.
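Something like this (the `bucket_by_length` helper and the batch size are mine, just to illustrate; `pad_sequence` is the standard PyTorch utility):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def bucket_by_length(tensors, batch_size):
    """Sort 1-D tensors by length and pad within each batch only."""
    order = sorted(range(len(tensors)), key=lambda i: len(tensors[i]))
    batches = []
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        batch = [tensors[i] for i in idx]
        # Neighbors in sorted order have similar lengths, so the
        # padding added within each batch should be minimal.
        padded = pad_sequence(batch, batch_first=True)
        lengths = torch.tensor([len(t) for t in batch])
        batches.append((padded, lengths, idx))  # keep idx to undo the sort
    return batches
```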