How to avoid overfitting

I am implementing logistic regression in PyTorch and I am getting 100% training accuracy. To reduce the overfitting, I applied PCA, scaled the data, and changed the shapes, then ran the program again, but I am still getting the same 100% accuracy!
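For reference, a minimal sketch of that preprocessing, assuming scikit-learn is available and that X_train / X_val are hypothetical NumPy feature arrays; the key point is that the scaler and the PCA are fit on the training split only:

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# X_train / X_val are hypothetical NumPy feature arrays.
scaler = StandardScaler().fit(X_train)       # fit scaling statistics on train only
X_train_s = scaler.transform(X_train)
X_val_s = scaler.transform(X_val)            # reuse the training statistics on val

pca = PCA(n_components=10).fit(X_train_s)    # n_components=10 is an arbitrary choice
X_train_p = pca.transform(X_train_s)
X_val_p = pca.transform(X_val_s)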

Iteration: 100. Loss: 0.8057927637150751.Correct:601. total:601. Accuracy: 100.0.
Iteration: 200. Loss: 0.7555791968181238.Correct:601. total:601. Accuracy: 100.0.
Iteration: 300. Loss: 0.8404111710965548.Correct:601. total:601. Accuracy: 100.0.
Iteration: 400. Loss: 0.7050717906166214.Correct:601. total:601. Accuracy: 100.0.
Iteration: 500. Loss: 0.7241441765733534.Correct:601. total:601. Accuracy: 100.0.
Iteration: 600. Loss: 0.8043209660671171.Correct:601. total:601. Accuracy: 100.0.
Iteration: 700. Loss: 0.6430073846611701.Correct:601. total:601. Accuracy: 100.0.
Iteration: 800. Loss: 0.7107452274836563.Correct:601. total:601. Accuracy: 100.0.
Iteration: 900. Loss: 0.7869175416749621.Correct:601. total:601. Accuracy: 100.0.

Any suggestion to overcome the overfitting is appreciated.

How are your classes distributed and how many classes do you have?
Also, what’s the validation and final test accuracy?
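If you don't have those numbers yet, here is a minimal evaluation sketch, assuming the model outputs one logit per class (e.g. a linear layer trained with nn.CrossEntropyLoss) and that val_loader is a hypothetical DataLoader over the held-out validation set:

import torch

def accuracy(model, loader, device="cpu"):
    model.eval()                                 # switch to evaluation mode
    correct = total = 0
    with torch.no_grad():                        # no gradients needed for evaluation
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            preds = model(inputs).argmax(dim=1)  # predicted class per sample
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return 100.0 * correct / total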

I have only two classes (class 1 and class 0).

I have 947 points in class "1" and 2980 points in class "0".
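With roughly 76% of the points in class "0", an unweighted model can score well by mostly predicting the majority class. One common counter-measure (an assumption here, not something described in the thread) is to weight the loss inversely to the class frequencies:

import torch
import torch.nn as nn

counts = torch.tensor([2980.0, 947.0])    # samples per class, from the post above
weights = counts.sum() / (2 * counts)     # inverse-frequency weights, ≈ [0.66, 2.07]
criterion = nn.CrossEntropyLoss(weight=weights)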

Thanks for the information.
Did you check the validation and test accuracy?

Thanks @ptrblck. The thing is that I am now getting a training accuracy of 75.884% and a validation accuracy of 100%, which is quite weird!

This could be caused by a bad train-val split. Make sure the data distribution stays the same across the train and val splits. Note also that 2980 / (947 + 2980) ≈ 75.88%, which matches your training accuracy exactly, so the model may simply be predicting class "0" for every sample.
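One way to keep the distributions matched is a stratified split, sketched below with scikit-learn; X and y are hypothetical arrays of features and integer 0/1 labels:

import numpy as np
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Both splits should now show roughly the same ~76% / 24% class ratio.
print(np.bincount(y_train) / len(y_train))
print(np.bincount(y_val) / len(y_val))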