Loss not converging for a simple regression task

I am working on a project to predict soccer player values from a set of inputs. The data consists of about 19,000 rows and 8 columns (7 input columns and 1 target column), all numerical. Here is a glimpse:

I am using a fully connected neural network for the prediction, but the problem is that the loss is not decreasing as it should. This is the result I get:

This is how I am creating the dataset and the dataloaders:
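Something along these lines (a minimal sketch; assume the CSV is already loaded into a pandas DataFrame `df` and `ValueEUR` is the target column):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# df is the pandas DataFrame with the 7 feature columns plus ValueEUR
features = torch.tensor(df.drop(columns=["ValueEUR"]).values, dtype=torch.float32)
target = torch.tensor(df["ValueEUR"].values, dtype=torch.float32).unsqueeze(1)

dataset = TensorDataset(features, target)

# 80/20 train/test split
train_size = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [train_size, len(dataset) - train_size])

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)
```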

This is the code I am using to run the model:
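Roughly this (a sketch; the hidden widths are at most 7, and MSE loss with Adam stand in for my actual choices):

```python
import torch
import torch.nn as nn

# Fully connected regressor; every layer width stays at 7
model = nn.Sequential(
    nn.Linear(7, 7),
    nn.ReLU(),
    nn.Linear(7, 7),
    nn.ReLU(),
    nn.Linear(7, 1),
)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    running_loss = 0.0
    for x_batch, y_batch in train_loader:  # train_loader from the snippet above
        optimizer.zero_grad()
        loss = criterion(model(x_batch), y_batch)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: train loss = {running_loss / len(train_loader):.4f}")
```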


I tried changing the learning rate, but the loss still does not decrease.

What could be the reason behind this? Thank you!

Is ValueEUR the target variable? It seems to have values on the order of 1e7, so the loss can be huge if the target is used directly, without any scaling.
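For example, you could standardize the target before training and undo the scaling when reporting predictions. A rough sketch, assuming the raw target is a float tensor called `target`:

```python
# Standardize the target so the MSE stays in a sane range
y_mean, y_std = target.mean(), target.std()
target_scaled = (target - y_mean) / y_std

# Train on target_scaled instead of the raw EUR values, then map
# predictions back to euros when you need them:
# value_eur = model(x) * y_std + y_mean
```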

The design of the network seems a bit odd: fully connected models usually project the input into a higher-dimensional space, but here the maximum width is 7 throughout the model. A simple sanity check is to take a very small subset of the dataset (e.g., two data points) and use it as both the train and test set; if you can easily overfit that tiny set, it rules out trivial bugs in the training loop. After that, try increasing the size of the hidden layers, or other common techniques such as normalizing the dataset (though in theory, linear layers with ReLU activations should be able to “learn around” unnormalized data).
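A rough sketch of those three ideas, assuming the feature/target tensors and dataset from the question are called `features`, `target`, and `dataset`:

```python
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

# 1) Sanity check: try to overfit two data points. The loss should drop
#    towards ~0 quickly; if it does not, the training loop itself is buggy.
tiny_loader = DataLoader(Subset(dataset, [0, 1]), batch_size=2, shuffle=True)

# 2) A wider network: project the 7 inputs into a higher-dimensional space.
wide_model = nn.Sequential(
    nn.Linear(7, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# 3) Per-feature standardization of the inputs.
x_mean, x_std = features.mean(dim=0), features.std(dim=0)
features_norm = (features - x_mean) / (x_std + 1e-8)
```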