How to split an unbalanced dataset into training/validation/test sets and train a model?

Hi all! I’m working on a project that involves a lot of camera trap images. My goal is to try out training several different models with the dataset, but unfortunately the dataset is super unbalanced: some classes have as few as 30 images, while others have 6000. I’m pretty new to PyTorch, but I’ve gotten as far as setting up my Dataset class to load in all the images and their labels as follows:

from pathlib import Path

import pandas as pd
from skimage import io
from torch.utils.data import Dataset


class CameraCatalogueDataset(Dataset):
    """Camera Catalogue dataset."""

    def __init__(self, csv_file, data_folder, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            data_folder (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.labels_frame = pd.read_csv(csv_file)
        self.data_folder = Path(data_folder)
        self.transform = transform

    def __len__(self):
        return len(self.labels_frame)

    def __getitem__(self, idx):
        # Use self.data_folder here, not the bare name data_folder
        img_name = self.data_folder / self.labels_frame.iloc[idx, 0]
        img = io.imread(img_name)
        img_label = self.labels_frame.iloc[idx, 1]
        sample = {'image': img, 'label': img_label}

        if self.transform:
            sample = self.transform(sample)

        return sample
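For context, here is a minimal sketch of the kind of split I have in mind: a stratified train/validation/test split over indices, using scikit-learn's `train_test_split` with its `stratify` parameter so each split keeps the same class proportions. The `labels` array below is hypothetical stand-in data; in practice it would come from something like `dataset.labels_frame.iloc[:, 1]`.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical labels mimicking the imbalance described above
# (30 images of one class, 6000 of another).
labels = np.array([0] * 30 + [1] * 6000)
indices = np.arange(len(labels))

# First carve off the test set, then split the remainder into
# train/validation; stratify keeps each class's proportion intact.
train_val_idx, test_idx = train_test_split(
    indices, test_size=0.15, stratify=labels, random_state=42)
train_idx, val_idx = train_test_split(
    train_val_idx, test_size=0.15,
    stratify=labels[train_val_idx], random_state=42)
```

The resulting index arrays could then be wrapped with `torch.utils.data.Subset(dataset, train_idx)` and so on, so the Dataset class above stays unchanged.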

What I want to do now is split all these images into training/validation/test sets in a balanced manner so I can train some models with them. I’ve checked out a couple of links that address parts of my problem:

How to augment the minority class only in an unbalanced dataset (goes over augmenting the minority classes in an unbalanced dataset)

and Is there a better way to split data and deal with an unbalanced dataset? (which goes over a way to split data in an unbalanced set)

Is there a way to combine these approaches, or are there better approaches entirely? Any help would be appreciated!
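To illustrate the kind of combination I mean: after a stratified split, the training batches could be rebalanced with `torch.utils.data.WeightedRandomSampler`, weighting each sample by the inverse of its class frequency. This is just a sketch of one possible approach, with a hypothetical `train_labels` list standing in for the labels of the training split.

```python
from collections import Counter

from torch.utils.data import WeightedRandomSampler

# Hypothetical labels of the *training* split only.
train_labels = [0] * 25 + [1] * 5100

# Weight each sample by the inverse of its class frequency so the
# sampler draws minority-class images about as often as majority ones.
class_counts = Counter(train_labels)
weights = [1.0 / class_counts[lbl] for lbl in train_labels]

sampler = WeightedRandomSampler(
    weights, num_samples=len(train_labels), replacement=True)
```

The sampler would then be passed to the DataLoader as `DataLoader(train_subset, batch_size=32, sampler=sampler)` (note that `sampler` and `shuffle=True` are mutually exclusive).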