I guess an easy solution is to physically partition your file.
Another option is to read the file as an incoming byte stream: stop reading when you reach the desired size, and resume reading once you need more data.
The tricky part about reading your data as a byte stream is that the stream itself has no structure. Say your matrix has shape x = (B, C, W, H); then your byte stream contains B * C * W * H elements. You also need to know the type of your data, i.e. int, float, double, etc., and figure out how many bytes each element occupies so you can read it correctly.
Here is some code to get you started:

import numpy as np
import torch

f = open(<your file path>, 'rb')  # 'rb' means read bytes
# let's assume your matrix is uint8, i.e. 8 bits or 1 byte per element
# loading one batch (B = 1)
batch = np.frombuffer(f.read(1 * C * W * H), dtype=np.uint8)
batch = batch.reshape((C, W, H))
# copy() because np.frombuffer returns a read-only array,
# and torch.from_numpy wants a writable one
tensor_batch = torch.from_numpy(batch.copy())
# I'm writing this code on the fly, it might have some bugs, but you should get the idea.