How to apply L1 and L2 regularization to a ResNet and overcome overfitting

I am working on a crop disease dataset with 20k images. The dataset is imbalanced, so I downsampled it, but when I trained a ResNet model on it I got 76% training accuracy and only 7% validation accuracy.
I have also augmented the data (brightness). The model is overfitting; what should I do?

You can try to add regularization like L2 by setting weight_decay on your optimizer, like:
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.2)
I think that for L1 you have to implement it manually.
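A minimal sketch of what implementing L1 manually could look like: add the sum of absolute parameter values to the loss before calling backward(). The model, data, and the `l1_lambda` coefficient below are placeholders, not from the thread.

```python
import torch
import torch.nn as nn

# Stand-in model and data; replace with your ResNet and your batches.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
# L2 regularization comes "for free" via weight_decay on the optimizer.
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.2)

inputs = torch.randn(4, 10)
targets = torch.tensor([0, 1, 0, 1])
l1_lambda = 1e-4  # strength of the L1 penalty; tune it for your problem

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
# L1 penalty: sum of absolute values over all trainable parameters
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = loss + l1_lambda * l1_penalty
loss.backward()
optimizer.step()
```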

You can also try giving your classes weights proportional to their distribution. Which loss are you using?

Have you tried to balance the class ratio between your training and validation sets?

I applied weight_decay as 0.01.
I am using CrossEntropyLoss.
I have downsampled as well.

Have you tried increasing the weight_decay?

If you are using CrossEntropyLoss, you can assign weights to your classes like:

weights = torch.ones(nb_classes)

weights[0] = 2 # if you want to set the weight of class 0 to 2, etc.

criterion = nn.CrossEntropyLoss(weight=weights)

Have you added a Dropout module?
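One common way to add dropout to a ResNet-style classifier is to replace the final fc layer with Dropout followed by a Linear of the same size. The tiny stand-in backbone below is an assumption for illustration (not torchvision's resnet); the pattern of swapping `model.fc` is the same.

```python
import torch
import torch.nn as nn

# Stand-in for a ResNet: a backbone plus a final `fc` classifier head.
class TinyBackbone(nn.Module):
    def __init__(self, num_features=64, num_classes=15):
        super().__init__()
        self.features = nn.Linear(32, num_features)
        self.fc = nn.Linear(num_features, num_classes)

    def forward(self, x):
        return self.fc(torch.relu(self.features(x)))

model = TinyBackbone()
in_features = model.fc.in_features
# Replace the head: dropout before the same-sized Linear layer.
model.fc = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(in_features, 15))

out = model(torch.randn(2, 32))  # output shape is unchanged: (batch, 15)
```

With torchvision's `resnet18` the same two lines (`in_features = model.fc.in_features; model.fc = ...`) should work, since its classifier head is also named `fc`.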

Actually, I am new to PyTorch, so I don't know how to set the weights. I tried but got a lot of errors.
Can you tell me what nb_classes is?

nb_classes is the number of classes in your dataset.
Say you have classes A, B, C and D; then nb_classes = 4.

Setting the weights can be tricky, but you can first try setting them inversely proportional to each class's frequency in the dataset to overcome the imbalance problem. If you have 20 images of A, 10 images of B and 5 images each of C and D, you can try:

weights = torch.ones(nb_classes) # nb_classes = 4

weights[0] = 1 # set the weight of class A to 1
weights[1] = 2 # set the weight of class B to 2, etc.
weights[2] = 4
weights[3] = 4

criterion = nn.CrossEntropyLoss(weight=weights)

You will have to adjust the code if you want to run it on the GPU.
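For the GPU case, one sketch of the adjustment: move the weight tensor to the same device as your model and data before constructing the loss. The device selection below is a common pattern, not from the thread.

```python
import torch
import torch.nn as nn

nb_classes = 4
weights = torch.ones(nb_classes)
weights[1] = 2.0

# Put the weight tensor on the same device as the model and inputs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
criterion = nn.CrossEntropyLoss(weight=weights.to(device))

# Dummy batch on the same device, just to show the call succeeds.
logits = torch.randn(3, nb_classes, device=device)
targets = torch.tensor([0, 1, 2], device=device)
loss = criterion(logits, targets)
```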

Not sure it will help you, but you could try!

Try increasing the weight decay and adding dropout to overcome overfitting.

That's a very large discrepancy between training and validation accuracy. My guess is that it's not from overfitting but rather from a difference between the training and validation data, or an implementation bug. Is your data imbalance reflected in both the training and validation datasets?

My guess is also that the training and validation datasets do not have the same distribution. Both should have the same class ratios.


Initially I had imbalanced data: the crop disease dataset has 15 classes, and the classes had image counts like 997, 152, 3500, 1500, etc., so once I noticed this I downsampled it.
Now I have balanced data with an 80% train split, i.e. downsampled train: 9906 images, ds_valid: 2470 images.
On this, I am getting a training accuracy of 56% and validation accuracy of 7%.

How would I get the weight of a single class? I am not able to calculate that.

OK, so you have 15 classes, and each class has a set of corresponding images, like:

class 1: images 1, 10, 100, 256, 3500, etc.
class 2: images 2, 3

class 15: images 5, 240, etc.

Am I correct?

In this case, have you checked how many images of class A you have in both the training and validation sets? You should have the same ratio between them. If your training set has 1000 images of class A (1/10 of the training set, since it has ~10,000 images), then your validation set should have ~250 images of class A (since it has ~2,500 images).
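A quick way to check this is to count labels per split and compare the ratios. The hard-coded label lists below are placeholders; with a torchvision ImageFolder you could build them from `dataset.samples` instead.

```python
from collections import Counter

# Placeholder label lists standing in for each split's labels.
train_labels = ["A"] * 1000 + ["B"] * 500 + ["C"] * 500
val_labels = ["A"] * 250 + ["B"] * 125 + ["C"] * 125

def class_ratios(labels):
    """Return the fraction of the split that each class occupies."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

train_ratios = class_ratios(train_labels)
val_ratios = class_ratios(val_labels)

# Print side-by-side ratios; they should roughly match across splits.
for cls in sorted(train_ratios):
    print(cls, round(train_ratios[cls], 3), round(val_ratios.get(cls, 0.0), 3))
```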

Class 1 has 997 images,
class 2 has 1478 images,
class 3 has 1000 images, and so on. This is before downsampling.
After downsampling, I have taken 952 images in 13 classes.

How many class 1 images do you have in your training dataset? How many in your validation dataset?
And for all the other classes?

In training it's 762 images and in validation it's 190.

You have to check this for every class.

Make sure to keep the same ratio between the two sets. For example, if your training set contains all of classes 1, 2, 3, 4, 5 and your validation set contains all of classes 6 and 7, it could lead to bad classification.

I have an equal number of training images, 762 per class, and 190 images per class in validation.

OK, so it seems you have the same ratio in both sets, and the classes should have the same weights (because they have equal numbers of images).

Are you doing cross-validation on your training dataset during training?

Have you tried increasing the weight_decay?

Yes, I am doing cross-validation on the training dataset.
I set weight decay to 0.2, but I see no improvement.

Can you help me calculate the weights?

Hello, sorry for the late answer.

In your case, if I understand correctly, you have an equal number of images for every class, so the weights should be uniform (a weight of 1 for every class, which is the default for cross-entropy loss).
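For the general imbalanced case, one simple way to compute the weights (an assumption, not the only scheme) is inverse frequency: divide the largest class count by each class's count. Using the earlier example counts of 20, 10, 5, 5:

```python
import torch
import torch.nn as nn

# Per-class image counts (from the example earlier in the thread).
counts = torch.tensor([20.0, 10.0, 5.0, 5.0])

# Inverse-frequency weights: the most frequent class gets weight 1,
# rarer classes get proportionally larger weights.
weights = counts.max() / counts  # -> [1., 2., 4., 4.]

criterion = nn.CrossEntropyLoss(weight=weights)
```

This reproduces the hand-set weights from the A/B/C/D example without having to type them out per class.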

It is difficult to debug this further without seeing some of your training and model code.