Efficient data loading

Hello, I have a dataset shaped like a batch of images (i.e. [N_sample, N_feature_dim, Height, Width]). I saved the whole dataset as a single chunk, which results in one file “train.pt” of more than 50 GB. Because of its size, loading it takes a very long time at startup.

So here is my question: is it better to save the dataset sample-wise (e.g. 1.pt, 2.pt, 3.pt… N.pt), or to save it in a different format other than .pt? I would appreciate any form of advice!