Creating a custom point cloud dataset by passing in a CSV file with paths to the input and output point clouds

Currently, I have a CSV file where one column holds the paths to the input point clouds and the other column holds the paths to the output point clouds. Here is the dataset: https://drive.google.com/drive/folders/1HLj8trab5uigxVBu5gzkeWQK1W0HTsG0?usp=sharing.
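
For context, the CSV has two columns, 'input' and 'output', each holding the path to one .txt point cloud file, roughly like this (the file names here are just placeholders):

input,output
clouds/input_000.txt,clouds/output_000.txt
clouds/input_001.txt,clouds/output_001.txt
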
I wrote this code to load the input and output point cloud for each pair, but at the moment the arrays are overwritten on every iteration, so I probably need to append to a list instead of overwriting. Each point has x, y, and z coordinates, which are stored in three separate layers, and I hope to reshape the data to work with 3D convolutions.

import pandas as pd
import numpy as np

# read only the two path columns from the CSV
df = pd.read_csv('train.csv', sep=',', usecols=['input', 'output'])
shape = df.shape

for current in range(shape[0]):
    # load one input/output pair; both arrays are overwritten on the next iteration
    input_pc = np.loadtxt(df.iloc[current, 0], delimiter=' ')
    print(input_pc.shape)
    output_pc = np.loadtxt(df.iloc[current, 1], delimiter=' ')
    print(output_pc.shape)
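
As a first step toward fixing that, I think something along these lines would append every pair to lists instead of overwriting (just a sketch of what I mean; the reshaping for 3D convolutions is still missing):

input_clouds = []
output_clouds = []

for current in range(shape[0]):
    # keep each loaded pair instead of overwriting it
    input_clouds.append(np.loadtxt(df.iloc[current, 0], delimiter=' '))
    output_clouds.append(np.loadtxt(df.iloc[current, 1], delimiter=' '))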

Here is my current DataLoader code:

from torch.utils.data import DataLoader

train_loader = DataLoader(MyDataset("./train.csv"), batch_size=16, shuffle=False, num_workers=0, worker_init_fn=None)

How would I adapt my code so that I can load a .txt file, convert it to a 3D matrix (with each separate file being a new point cloud), use the CSV file to determine the paths of the input and output, and load the data so that each input/output pair is one batch? Effectively, I'm trying to create a standard autoencoder, but with point clouds instead of the standard images, and I would like to be able to specify an output file that is different from the input file.
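
To make the question concrete, here is the rough shape of the Dataset class I have in mind (this is only a sketch, and converting each loaded array into a 3D matrix for the convolutions is exactly the part I am unsure about):

import pandas as pd
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, csv_path):
        # one row per input/output pair of point cloud files
        self.df = pd.read_csv(csv_path, sep=',', usecols=['input', 'output'])

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # load the pair of .txt point clouds referenced by this row
        input_pc = np.loadtxt(self.df.iloc[idx, 0], delimiter=' ')
        output_pc = np.loadtxt(self.df.iloc[idx, 1], delimiter=' ')
        # TODO: reshape/voxelize each cloud into a 3D matrix for 3D convolutions
        return torch.from_numpy(input_pc).float(), torch.from_numpy(output_pc).float()

# batch_size=1 so that each input/output pair forms its own batch
# (the point clouds may have different numbers of points)
train_loader = DataLoader(MyDataset("./train.csv"), batch_size=1, shuffle=False)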