I am trying to implement a research paper on image segmentation. Below are the preprocessing steps from the paper. Can anyone check whether I am implementing the preprocessing and data-loading steps correctly?

A validation split of 15% is selected

Random crops of size 512 × 512 are extracted from the original images. We opt for a dynamically augmented data set, where training samples are generated randomly at the start of each minibatch.

We artificially grow our data set by a factor of 8 through rotations at 90, 180 and 270 degrees and horizontal flips.
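For reference, the factor of 8 comes from combining the 4 rotations (0, 90, 180, 270 degrees) with the 2 flip states. A minimal NumPy sketch that enumerates all 8 variants (the helper name `dihedral_variants` is hypothetical, not from the paper):

```python
import numpy as np

def dihedral_variants(img):
    """Return the 8 variants behind the factor-of-8 augmentation:
    rotations by 0/90/180/270 degrees, each with and without a horizontal flip."""
    out = []
    for k in range(4):              # k quarter-turns: 0, 90, 180, 270 degrees
        r = np.rot90(img, k)
        out.append(r)
        out.append(np.fliplr(r))    # horizontal flip of the rotated image
    return out
```

In an on-the-fly pipeline you would not enumerate all 8, but sample one of them uniformly per minibatch item, which gives the same augmented distribution.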

We have implemented elastic deformation by sampling control points on a regularly spaced 100 × 100 grid. Each control point has isotropic Gaussian noise added with σ = 20.
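Note that this elastic deformation step is not covered by any torchvision transform in the code below. A sketch of one way to do it, assuming a 2-D grayscale NumPy image, interpreting the "100 × 100 grid" as one control point every 100 pixels, and using SciPy's `map_coordinates` with bilinear upsampling of the displacement field (the paper does not specify the interpolation; the function name `elastic_deform` is hypothetical):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def elastic_deform(image, grid_spacing=100, sigma=20, rng=None):
    """Elastic deformation of a 2-D (grayscale) image: perturb control points
    on a coarse grid with isotropic Gaussian noise, upsample the displacement
    field bilinearly, then warp the image with it."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # one control point every `grid_spacing` pixels (plus a border row/column)
    gh, gw = h // grid_spacing + 2, w // grid_spacing + 2
    dy = rng.normal(0.0, sigma, (gh, gw))  # isotropic Gaussian noise per point
    dx = rng.normal(0.0, sigma, (gh, gw))
    # bilinearly upsample the coarse displacement fields to full resolution
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    dy_full = map_coordinates(dy, [yy / grid_spacing, xx / grid_spacing], order=1)
    dx_full = map_coordinates(dx, [yy / grid_spacing, xx / grid_spacing], order=1)
    # warp: sample the image at the displaced coordinates
    coords = [np.clip(yy + dy_full, 0, h - 1), np.clip(xx + dx_full, 0, w - 1)]
    return map_coordinates(image, coords, order=1)
```

It could be wired into the pipeline via `transforms.Lambda` on the NumPy array before `ToTensor()`; for segmentation, the same displacement field must also be applied to the label mask.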
Following is my code for the above task:
import random
import numpy as np
import torch
from torchvision import datasets, transforms
from torchvision.transforms import functional as TF
from torch.utils.data.sampler import SubsetRandomSampler

# RandomRotation([90, 180]) samples a continuous angle in that range; the
# paper needs discrete rotations of 0/90/180/270 degrees instead, so pick
# a random multiple of 90 degrees per sample.
data_transform = transforms.Compose([
    transforms.RandomCrop((512, 512)),
    transforms.RandomHorizontalFlip(),
    transforms.Lambda(lambda img: TF.rotate(img, 90 * random.randint(0, 3))),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder(path_dir, transform=data_transform)
validation_split = 0.15
shuffle_dataset = True
random_seed = 42
dataset_size = len(dataset)
indices = list(range(dataset_size))
split = int(np.floor(validation_split*dataset_size))
if shuffle_dataset:
    np.random.seed(random_seed)
    np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)
batch_size = 4
num_epochs = 10
iter_per_ep = len(train_sampler) // batch_size
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=valid_sampler)
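As a quick check that the index-splitting logic behaves as intended, here is a self-contained mirror of it (with a hypothetical `dataset_size` of 1000 standing in for `len(dataset)`) plus assertions that the splits are disjoint and sized correctly:

```python
import numpy as np

dataset_size = 1000                         # hypothetical image count
validation_split = 0.15
indices = list(range(dataset_size))
np.random.seed(42)
np.random.shuffle(indices)
split = int(np.floor(validation_split * dataset_size))
train_indices, val_indices = indices[split:], indices[:split]

assert set(train_indices).isdisjoint(val_indices)  # no leakage between splits
assert len(val_indices) == 150                     # exactly 15% held out
```

One caveat with the original setup: because both loaders share the same `dataset` object, the validation loader also applies the random crop/flip/rotation augmentations; typically you would build a second `ImageFolder` with deterministic transforms for validation.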