Dataset values become extremely large after loading from Pandas

I’m trying to load data from my dataset into the DataLoader class, but when I do, the values explode. None of the numbers in my CSV are anywhere near this large, and as I train, the values grow even larger (around 8e19). How can I avoid this?

Dataset Class:

import pandas as pd
from torch.utils.data import Dataset


class myData(Dataset):
    def __init__(self,
                 path,
                 train_size: float = 0.8,
                 val_size: float = 0.1,
                 test_size: float = 0.1):
        self.dataframe = pd.read_csv(path)

        # split the rows into contiguous train / val / test blocks
        n = len(self.dataframe)
        train_end = int(n * train_size)
        val_end = int(n * (train_size + val_size))
        test_end = int(n * (train_size + val_size + test_size))

        self.train_dataset = self.dataframe[:train_end].values
        self.val_dataset = self.dataframe[train_end:val_end].values
        self.test_dataset = self.dataframe[val_end:test_end].values

    def get_train_dataset(self):
        return self.train_dataset

    def get_test_dataset(self):
        return self.test_dataset

    def get_val_dataset(self):
        return self.val_dataset

    def __len__(self):
        return len(self.dataframe)

    def __getitem__(self, idx):
        # use .iloc to select a row by position; self.dataframe[idx]
        # would look up a column named idx, not a row
        return self.dataframe.iloc[idx].values

DataLoader:

from torch.utils.data import DataLoader
...
train_loader = DataLoader(train_dataset,
                          batch_size=batch_size,
                          shuffle=False,
                          num_workers=0)
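
For context, the loader gets iterated roughly like this during training (the device and loop body are approximate, not the exact training code, and I am assuming train_dataset comes from get_train_dataset() above). The default collate function preserves NumPy's float64 dtype, which is why the batch prints as torch.float64:

for batch in train_loader:
    # default_collate converts each NumPy slice to a tensor,
    # keeping the float64 dtype it inherits from Pandas
    batch = batch.to("cuda")
    ...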

The input tensor for training:

tensor([[1.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 4.0524e+02,
         4.0524e+02],
        [2.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 4.0524e+02,
         4.0524e+02],
        [3.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 4.0524e+02,
         4.0524e+02],
        ...,
        [1.4080e+03, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 3.9697e+02,
         4.0524e+02],
        [1.4090e+03, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 3.9697e+02,
         4.0524e+02],
        [1.4100e+03, 0.0000e+00, 0.0000e+00,  ..., 1.4504e+02, 3.9697e+02,
         4.0524e+02]], device='cuda:0', dtype=torch.float64)

If the parameters of the model are diverging, you might want to try decreasing the learning rate.
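
For example, a minimal sketch of what lowering the learning rate looks like (the model and the value 1e-4 are placeholders, not taken from your training code):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model; substitute your own

# a smaller learning rate shrinks each update step,
# which often stops the parameters from diverging
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)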

Do you mean the CSV file reading is not working as intended?

Thank you for your reply. I looked into this some more, and I think I need to normalize my input before feeding it into the model.

I normalized my data to the range [0, 1]. This fixed the issue.
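
For anyone hitting the same problem, here is a minimal min-max scaling sketch (the CSV path and the 0.8 train fraction are assumptions chosen to match the code above):

import pandas as pd

df = pd.read_csv("data.csv")  # hypothetical path

# min-max scaling: map every column into [0, 1]
# (constant columns would need special handling to avoid division by zero)
train = df[: int(len(df) * 0.8)]
col_min, col_max = train.min(), train.max()
df_scaled = (df - col_min) / (col_max - col_min)

Fitting the minimum and maximum on the training split only avoids leaking statistics from the validation and test rows.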