Pipeline for working with tabular (CSV) data

I’d like to train on a tabular dataset (CSV), but I’m not sure of the best way to turn a pandas DataFrame into a PyTorch dataset. With image datasets, I simply use torchvision.datasets.ImageFolder to create a PyTorch dataset directly from my data directory, then use torch.utils.data.random_split to split it into train, validation, and test sets. I’d like a similar workflow for CSV files, but every tutorial I’ve seen uses scikit-learn to split and normalize the data first and then defines a custom PyTorch dataset class. Why isn’t there a way to do this without scikit-learn or a custom dataset class, similar to how I work with images?
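For context, the closest I’ve gotten is converting the DataFrame columns to tensors by hand and wrapping them in torch.utils.data.TensorDataset, which at least avoids a custom dataset class. A minimal sketch (the DataFrame here is toy data standing in for a real pd.read_csv call, and the column names are made up):

```python
import pandas as pd
import torch
from torch.utils.data import TensorDataset, random_split

# Toy stand-in for a real CSV load, e.g. df = pd.read_csv("data.csv")
df = pd.DataFrame({
    "feat_a": range(100),
    "feat_b": range(100, 200),
    "label": [i % 2 for i in range(100)],
})

# Convert feature columns and labels to tensors
X = torch.tensor(df[["feat_a", "feat_b"]].values, dtype=torch.float32)
y = torch.tensor(df["label"].values, dtype=torch.long)

# Wrap in a built-in dataset -- no custom Dataset subclass needed
dataset = TensorDataset(X, y)

# Split into train / validation / test, as with image data
train_set, val_set, test_set = random_split(dataset, [70, 15, 15])
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

This still leaves normalization to be done on the DataFrame (or tensors) beforehand, though, which is part of what I’m asking about.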