How to handle overfitting properly

I am training a CNN regression model on a dataset of 34,560 samples. The training error rate is already below 5%, but the validation error rate is over 60%, which looks like an overfitting problem. I have tried the following four ways to address it, but none of them works well:

  1. Increase the dataset size
  2. Reduce the model complexity
  3. Add a dropout layer before the output layer
  4. Use L2 regularization / weight decay

Probably I did not use them the right way. Can someone explain the details of these methods? Or are there other ways to solve the overfitting problem?
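
For reference, here is roughly how I set up (3) and (4); the layer sizes and hyperparameter values below are placeholders rather than my real ones:

```python
import torch.nn as nn
import torch.optim as optim

# (3) dropout right before the output layer, (4) L2 regularization via the
# optimizer's weight_decay argument -- placeholder sizes, not my actual model
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout before the output layer
    nn.Linear(128, 80),   # 80 regression outputs
)
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```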

If you always feed the training data to the model in exactly the same form, you'll likely end up overfitting.

Augmentations are a way to “extend” your dataset and build a more robust model. This is basically done by altering the images on the fly with crops, filters, flips, rotations, etc.

See here:
https://pytorch.org/vision/main/transforms.html
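
For example, with torchvision the augmentations can be chained and applied on the fly; this is a minimal sketch, and the particular transforms are only illustrative:

```python
import torchvision.transforms as T

# Applied each time a sample is drawn, so every epoch sees a slightly
# different version of each image.
train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=15),
    T.RandomResizedCrop(128, scale=(0.8, 1.0)),
    T.ToTensor(),
])
```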

Yes, data augmentation works well for classification, but my task is regression, and I have never tried augmentation on a regression problem. Could you explain more about data augmentation for regression tasks?

The appropriate augmentations are determined by the target of the regression model. For example, if your target is the distance between two points in the image, then rotation, hue, and brightness changes would be appropriate augmentations, but you probably shouldn't crop or shear.

The question ought to be, will this type of augmentation materially change the target? Nevertheless, it’s still a good idea to incorporate appropriate forms of augmentation. It prevents your model from memorizing irrelevant features in the data.
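
In code, that amounts to augmenting only the input while passing the regression target through untouched; a sketch, assuming an ordinary RGB input and with transforms chosen only as examples of target-preserving ones:

```python
import torchvision.transforms as T

# Transforms assumed not to change the target (e.g. a distance between
# two points in the image); the target itself is never touched.
safe_augment = T.Compose([
    T.ColorJitter(brightness=0.2),
    T.RandomRotation(degrees=10),
])

def augment_sample(image, target):
    return safe_augment(image), target  # target stays exactly the same
```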

My task is to simulate an optical grating. The inputs are 8 coefficients and a 128x128 binary image, and the outputs are 80 numbers.
First, I extend the 8 coefficients into a 128x128 matrix. Then I apply convolution and average pooling twice to both the matrix and the binary image, and concatenate the results into a 64x32x32 tensor. After five more convolution and average-pooling stages on that tensor, I flatten it and apply a fully connected layer to get the 80 outputs. That is basically what my model looks like.
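
In code, the forward pass looks roughly like this (a simplified sketch of the description above; the intermediate channel counts are illustrative):

```python
import torch
import torch.nn as nn

class GratingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # two conv + avg-pool stages per input stream: 128x128 -> 32x32
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            )
        self.coef_stream = stream()  # 128x128 matrix built from the 8 coefficients
        self.img_stream = stream()   # 128x128 binary image
        # five conv + avg-pool stages on the concatenated 64x32x32 tensor: 32x32 -> 1x1
        trunk = []
        for _ in range(5):
            trunk += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)]
        self.trunk = nn.Sequential(*trunk)
        self.fc = nn.Linear(64, 80)  # flatten 64x1x1 and map to the 80 outputs

    def forward(self, coef_map, image):
        # coef_map, image: (B, 1, 128, 128)
        x = torch.cat([self.coef_stream(coef_map), self.img_stream(image)], dim=1)
        return self.fc(self.trunk(x).flatten(1))
```
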
So what kinds of data augmentation do you recommend for my model?

Ask yourself this:

  1. If I performed ______ augmentation on the image, would I still be able to determine what the 80 outputs (targets) should be (with sufficient training)?
  2. And would the targets still be the same?

Take rotation, for example. Would rotating the image by n degrees alter what the targets should be? If not, then rotation would be a suitable augmentation. Rotations alone can multiply your effective dataset size by up to 360 (one copy per whole degree).

Then go through each augmentation and determine if it would satisfy those conditions.
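
For example, if rotation passes that test for your task, one hypothetical way to apply it on the fly is inside the Dataset; the class and attribute names here are placeholders, and the coefficient matrix is left unrotated since it isn't clear that rotating it would be meaningful:

```python
import random
import torchvision.transforms.functional as TF
from torch.utils.data import Dataset

class GratingDataset(Dataset):
    """Hypothetical wrapper: rotates the binary image on the fly, assuming
    rotation leaves the 80 targets unchanged."""
    def __init__(self, coef_maps, images, targets):
        self.coef_maps, self.images, self.targets = coef_maps, images, targets

    def __len__(self):
        return len(self.targets)

    def __getitem__(self, idx):
        angle = random.uniform(0, 360)               # fresh angle on every access
        image = TF.rotate(self.images[idx], angle)   # targets stay the same
        return self.coef_maps[idx], image, self.targets[idx]
```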