DataLoader .to_sparse() speed-up

My collate function for my DataLoader involves constructing a sparse matrix via `.to_sparse()`. The `.to_sparse()` call is the bottleneck in data loading and slows down training significantly. Is there any way to get around this? I'm thinking of saving the sparse matrix for each batch with `torch.save()` and then loading that instead of calling `.to_sparse()` every time. I'm not exactly sure how to go about it, though - in particular, how to construct a unique hash for each batch. Thanks!
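To make the question concrete, here is a minimal sketch of the caching idea I have in mind. The names (`CACHE_DIR`, `batch_key`, `cached_to_sparse`) are hypothetical, and it assumes each batch can be identified by the sample indices it contains (so the key is stable across epochs as long as the same indices end up in the same batch):

```python
import hashlib
from pathlib import Path

import torch

CACHE_DIR = Path("sparse_cache")  # hypothetical on-disk cache directory
CACHE_DIR.mkdir(exist_ok=True)


def batch_key(indices):
    """Build a stable hash from the sample indices that make up a batch."""
    raw = ",".join(str(i) for i in sorted(indices)).encode()
    return hashlib.sha1(raw).hexdigest()


def cached_to_sparse(dense, indices):
    """Return the sparse form of `dense`, loading from disk when cached.

    On a cache miss this pays the .to_sparse() cost once and saves the
    result with torch.save(); later epochs load the file instead.
    """
    path = CACHE_DIR / f"{batch_key(indices)}.pt"
    if path.exists():
        return torch.load(path, weights_only=False)
    sparse = dense.to_sparse()
    torch.save(sparse, path)
    return sparse
```

Two caveats with this sketch: the key is only meaningful if batch composition is deterministic (e.g. `shuffle=False`, or the key is derived from per-sample IDs rather than positions), and with multiple DataLoader workers you would want to guard against two workers writing the same file at once.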