Can't get fine-tuning to work without freezing weights

Hi. I'm trying to fine-tune AlexNet. If I use pretrained AlexNet with requires_grad=False and add new layers on top, it works quite well after ten epochs. The problem arises when I don't set requires_grad=False, because I want the AlexNet layers to train too: I get very low test accuracy. I've tried various learning rates but can't get past a validation accuracy of 0.11, even after 20 epochs. Is this normal? Do I have to train for more epochs, like 100, to get a better result? Or am I missing something?
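
For reference, here's a minimal sketch of my setup (num_classes and the learning rate values are placeholders standing in for what I actually use, and I'm omitting the training loop, which is a standard cross-entropy loop in both cases):

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # placeholder for my actual number of classes

# Load pretrained AlexNet and swap the last classifier layer for my task
model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, num_classes)

# --- Variant A: freeze the pretrained layers (this works well for me) ---
for name, param in model.named_parameters():
    if not name.startswith("classifier.6"):
        param.requires_grad = False  # freeze everything except the new layer
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-3, momentum=0.9,
)

# --- Variant B: train everything (this is where accuracy gets stuck ~0.11) ---
# All parameters stay trainable; I've tried lr values like 1e-2, 1e-3, 1e-4
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```

Is the Variant B setup itself wrong somehow, or is it just a matter of hyperparameters/epochs?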