Transfer learning: please help

Hello. I want to do transfer learning with PyTorch, but my error rate keeps going up and down abnormally and my accuracy is not increasing. Am I doing something wrong? The images are 1920x1080, but due to lack of memory I use 28x28, and they are Formula 1 car images.

Note: the ready-made models and weights I use are not suitable for my dataset. What do you think?

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

model = models.resnet152(pretrained=True)  # pretrained=True to use the pretrained models

for param in model.parameters():  # set to True to train the whole model
    param.requires_grad = False   # freeze the pretrained parameters

num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 100)  # replace the final fully connected layer
model.fc2 = nn.ReLU()
model.fc3 = nn.Linear(100, 20)
model.fc4 = nn.ReLU()
model.fc5 = nn.Linear(20, 10)
model.fc6 = nn.ReLU()
model.fc7 = nn.Linear(10, 4)  # 4 output classes

error = nn.CrossEntropyLoss()
optimizer = optim.Adamax(model.parameters(), lr=0.0001)
num_epochs = 20
count = 0
losses = []
iterasyon = []

for epoch in range(num_epochs):

    for i, (images, label) in enumerate(train_loader):

        out = model(images)
        loss = error(out, label)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        losses.append(loss.item())

        count += 1

        # evaluate on the test set every 200 iterations
        if count % 200 == 0:

            total = 0
            correct = 0
            correct_hata = 0  # count of wrong predictions (hata = error)

            with torch.no_grad():
                for image, labels in test_loader:

                    out = model(image)

                    pred = torch.max(out.data, 1)[1]

                    total += len(labels)

                    correct += (pred == labels).sum().item()
                    correct_hata += (pred != labels).sum().item()

            dogruluk = 100 * correct / float(total)   # dogruluk = accuracy (%)
            hata = 100 * correct_hata / float(total)  # hata = error rate (%)

            iterasyon.append(count)

            print('Iteration: {}  Loss: {}  Accuracy: {}% Error: {}%'.format(count, loss.item(), dogruluk, hata))
```
Result:

Iteration: 200 Loss: 1.5698899030685425 Accuracy: 27.848100662231445% Error: 72.15190124511719%
Iteration: 400 Loss: 1.530423879623413 Accuracy: 26.582279205322266% Error: 73.417724609375%
Iteration: 600 Loss: 1.5104633569717407 Accuracy: 26.582279205322266% Error: 73.417724609375%
Iteration: 800 Loss: 1.3361421823501587 Accuracy: 27.848100662231445% Error: 72.15190124511719%
Iteration: 1000 Loss: 1.297659158706665 Accuracy: 20.253164291381836% Error: 79.74683380126953%
Iteration: 1200 Loss: 1.3579907417297363 Accuracy: 27.848100662231445% Error: 72.15190124511719%
Iteration: 1400 Loss: 1.1254953145980835 Accuracy: 25.316455841064453% Error: 74.68354797363281%
Iteration: 1600 Loss: 1.9074203968048096 Accuracy: 25.316455841064453% Error: 74.68354797363281%
Iteration: 1800 Loss: 1.3322985172271729 Accuracy: 27.848100662231445% Error: 72.15190124511719%
Iteration: 2000 Loss: 1.3254060745239258 Accuracy: 25.316455841064453% Error: 74.68354797363281%
Iteration: 2200 Loss: 1.1631817817687988 Accuracy: 20.253164291381836% Error: 79.74683380126953%
Iteration: 2400 Loss: 1.2752320766448975 Accuracy: 20.253164291381836% Error: 79.74683380126953%
Iteration: 2600 Loss: 1.3860232830047607 Accuracy: 27.848100662231445% Error: 72.15190124511719%
Iteration: 2800 Loss: 1.2704166173934937 Accuracy: 20.253164291381836% Error: 79.74683380126953%
Iteration: 3000 Loss: 1.081123948097229 Accuracy: 25.316455841064453% Error: 74.68354797363281%

2. Accuracy:

Train accuracy:
Got 180 / 200 with accuracy 90.00

Test accuracy:
Got 43 / 79 with accuracy 54.43

You are freezing all parameters of the untrained model besides the new linear layer in model.fc.
Since the model wasn’t pretrained, I would assume your training fails because you are trying to apply transfer learning to a randomly initialized model.
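
As a quick check, you could print which parameters are still trainable; with your snippet, only the layers created after the freezing loop should show up (a minimal sketch):

```python
# Sketch: list the parameters that will actually receive gradients.
# With the code above, only the layers created after the freezing loop
# (the new fc and the extra Linear layers fc3, fc5, fc7) are printed.
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name)
```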

Also, model.fc2, model.fc3, etc. won’t be used unless you override the forward and use them explicitly. If you want to use more than a single new linear layer in model.fc, replace model.fc with an nn.Sequential container.
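
Something along these lines would work (just a sketch, reusing the layer sizes from your code), since ResNet's forward only calls self.fc for the classifier part:

```python
import torch.nn as nn

# Pack the whole new head into model.fc so the default forward runs it.
num_ftrs = model.fc.in_features
model.fc = nn.Sequential(
    nn.Linear(num_ftrs, 100),
    nn.ReLU(),
    nn.Linear(100, 20),
    nn.ReLU(),
    nn.Linear(20, 10),
    nn.ReLU(),
    nn.Linear(10, 4),  # 4 output classes, matching the original fc7
)
```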

Thanks @ptrblck, thank you for your help. What I don’t understand is this: I want to use pretrained weights and a proven model. When I set pretrained=True, it downloads the .pth file with those weights, and I want to use them. But don’t I need to freeze the parameters? What happens if I don’t freeze them? I don’t want to train from scratch; I want to take an existing model and its weights, add a few final layers, and train only the layers I add.

Sorry, I might have looked at another post and saw pretrained=False in the code snippet.
You are right that you are already using a pretrained model, so skip this part and check the last issue (i.e. replacing the model.fc layer with nn.Sequential).

Thank you @ptrblck. I changed it to an nn.Sequential as you said, but my problem is still not fixed. The ready-made models and weights I use are not suitable for my dataset. What do you think, could it be caused by this? And is my code correct?

If you assume the pretrained weights are not suitable for your task, you could either fine-tune the entire model (i.e. remove the parameter freezing) or train it from scratch.
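
A rough sketch of the two options (the new_head helper is just for illustration, reusing the head sizes from the earlier snippet):

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

def new_head(in_features):
    # Hypothetical helper: same head as above, ending in 4 output classes.
    return nn.Sequential(
        nn.Linear(in_features, 100), nn.ReLU(),
        nn.Linear(100, 20), nn.ReLU(),
        nn.Linear(20, 10), nn.ReLU(),
        nn.Linear(10, 4),
    )

# Option 1: fine-tune the entire pretrained model. Keep the pretrained
# weights but do not freeze anything, so every layer gets updated.
model = models.resnet152(pretrained=True)
model.fc = new_head(model.fc.in_features)
optimizer = optim.Adamax(model.parameters(), lr=1e-4)

# Option 2: train from scratch, i.e. random initialization, no pretrained weights.
model_scratch = models.resnet152(pretrained=False)
model_scratch.fc = new_head(model_scratch.fc.in_features)
optimizer_scratch = optim.Adamax(model_scratch.parameters(), lr=1e-4)
```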