Kernel dies when reading a very large CSV

When I read a large CSV, it consumes all available memory and the JupyterLab kernel dies.
I tried splitting the training dataset into two CSVs. Reading the first CSV succeeded, but the kernel died again while reading the second.
Is there a way to use TabularDataset as a generator?
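As a possible workaround while waiting for an answer, here is a minimal sketch of chunked reading with plain pandas (`read_csv` with `chunksize` returns an iterator of DataFrames, so only one chunk is in memory at a time). This is not a TabularDataset API; the path and chunk size below are placeholders:

```python
import pandas as pd

def iter_csv_chunks(path, chunksize=100_000):
    """Yield successive DataFrame chunks from a large CSV.

    Only one chunk is held in memory at a time, so this avoids
    loading the whole file at once.
    """
    # pd.read_csv with chunksize returns a TextFileReader iterator
    for chunk in pd.read_csv(path, chunksize=chunksize):
        yield chunk
```

Each yielded chunk is an ordinary DataFrame, so it could presumably be wrapped or processed piecewise, if the training API accepts incremental input.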