I'm using the PyTorch ImageNet example on a custom dataset, invoked like this:
python main.py --arch=alexnet dataset/
My dataset has nearly 300 categories and about 12,000 images in total, organized into train and val directories.
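In case it matters, here is how I sanity-check the layout. This is a minimal sketch, assuming the standard torchvision ImageFolder structure that the example expects (one subdirectory per class under train and val; dataset/ is just my root path from the command above):

import torchvision.datasets as datasets

# The ImageNet example loads data with ImageFolder, so the layout must be
# dataset/train/<class_name>/xxx.jpg and dataset/val/<class_name>/yyy.jpg
train_set = datasets.ImageFolder('dataset/train')
val_set = datasets.ImageFolder('dataset/val')
print(len(train_set.classes), len(train_set.samples))  # expect ~300 classes
print(len(val_set.classes), len(val_set.samples))      # train + val ~ 12000 images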
Portions of the training output are shown below. As you can see, the top-1 and top-5 precision barely change or improve; the validation result is always Prec@1 0.333 Prec@5 1.667. So I just wonder: why does this happen?
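If I'm reading the numbers right, those validation values are exactly what a classifier that guesses uniformly at random would score on ~300 classes. A quick check (assuming exactly 300 classes; adjust to the real class count):

num_classes = 300  # approximate; my dataset has nearly 300 categories

# Expected top-k precision (in percent) of a uniformly random classifier
chance_top1 = 100.0 * 1 / num_classes  # 0.333
chance_top5 = 100.0 * 5 / num_classes  # 1.667
print(chance_top1, chance_top5)

So the model seems stuck at chance level. The relevant portions of the log: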
=> creating model 'alexnet'
Epoch: [0][0/29] Time 11.987 (11.987) Data 10.121 (10.121) Loss 6.9067 (6.9067) Prec@1 0.391 (0.391) Prec@5 0.781 (0.781)
Epoch: [0][10/29] Time 0.336 (2.764) Data 0.266 (2.488) Loss 6.8902 (6.9003) Prec@1 0.000 (0.178) Prec@5 1.172 (1.598)
Epoch: [0][20/29] Time 7.898 (2.771) Data 7.827 (2.578) Loss 6.8422 (6.8640) Prec@1 0.000 (0.186) Prec@5 1.953 (1.581)
Test: [0/10] Time 4.700 (4.700) Loss 6.7994 (6.7994) Prec@1 0.000 (0.000) Prec@5 3.125 (3.125)
* Prec@1 0.333 Prec@5 1.625
Epoch: [1][0/29] Time 2.927 (2.927) Data 2.847 (2.847) Loss 6.8089 (6.8089) Prec@1 0.000 (0.000) Prec@5 2.734 (2.734)
Epoch: [1][10/29] Time 0.192 (0.822) Data 0.025 (0.681) Loss 6.7899 (6.7945) Prec@1 0.391 (0.320) Prec@5 1.953 (1.776)
Epoch: [1][20/29] Time 2.253 (0.824) Data 2.183 (0.689) Loss 6.4336 (6.7144) Prec@1 0.391 (0.316) Prec@5 3.516 (1.730)
Test: [0/10] Time 3.146 (3.146) Loss 6.0892 (6.0892) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
* Prec@1 0.333 Prec@5 1.667
Epoch: [2][0/29] Time 3.009 (3.009) Data 2.920 (2.920) Loss 6.0913 (6.0913) Prec@1 0.391 (0.391) Prec@5 1.953 (1.953)
Epoch: [2][10/29] Time 0.189 (0.836) Data 0.000 (0.681) Loss 6.0209 (6.0952) Prec@1 0.391 (0.320) Prec@5 0.391 (1.562)
Epoch: [2][20/29] Time 2.251 (0.822) Data 2.181 (0.680) Loss 5.9183 (6.0205) Prec@1 0.000 (0.223) Prec@5 0.781 (1.302)
Test: [0/10] Time 3.046 (3.046) Loss 5.9031 (5.9031) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
* Prec@1 0.333 Prec@5 1.667
...
Epoch: [46][0/29] Time 2.996 (2.996) Data 2.915 (2.915) Loss 5.7088 (5.7088) Prec@1 0.000 (0.000) Prec@5 0.781 (0.781)
Epoch: [46][10/29] Time 0.188 (0.844) Data 0.000 (0.696) Loss 5.7168 (5.7085) Prec@1 0.000 (0.178) Prec@5 1.562 (1.705)
Epoch: [46][20/29] Time 2.090 (0.828) Data 2.011 (0.685) Loss 5.7267 (5.7122) Prec@1 0.000 (0.205) Prec@5 0.781 (1.562)
Test: [0/10] Time 3.080 (3.080) Loss 5.7117 (5.7117) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
* Prec@1 0.333 Prec@5 1.667
Epoch: [47][0/29] Time 2.943 (2.943) Data 2.852 (2.852) Loss 5.7018 (5.7018) Prec@1 0.781 (0.781) Prec@5 3.125 (3.125)
Epoch: [47][10/29] Time 0.196 (0.852) Data 0.000 (0.701) Loss 5.7113 (5.7091) Prec@1 0.781 (0.355) Prec@5 1.953 (1.953)
Epoch: [47][20/29] Time 2.221 (0.838) Data 2.136 (0.695) Loss 5.7153 (5.7120) Prec@1 0.000 (0.260) Prec@5 1.562 (1.656)
Test: [0/10] Time 3.071 (3.071) Loss 5.7107 (5.7107) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
* Prec@1 0.333 Prec@5 1.667
Epoch: [48][0/29] Time 3.054 (3.054) Data 2.978 (2.978) Loss 5.7045 (5.7045) Prec@1 0.391 (0.391) Prec@5 2.344 (2.344)
Epoch: [48][10/29] Time 0.182 (0.837) Data 0.000 (0.689) Loss 5.7104 (5.7084) Prec@1 0.391 (0.249) Prec@5 2.734 (1.847)
Epoch: [48][20/29] Time 1.824 (0.819) Data 1.753 (0.700) Loss 5.7171 (5.7120) Prec@1 0.391 (0.242) Prec@5 0.781 (1.488)
Test: [0/10] Time 3.084 (3.084) Loss 5.7100 (5.7100) Prec@1 0.000 (0.000) Prec@5 3.125 (3.125)
* Prec@1 0.333 Prec@5 1.667
Epoch: [49][0/29] Time 3.213 (3.213) Data 3.137 (3.137) Loss 5.7120 (5.7120) Prec@1 0.781 (0.781) Prec@5 1.172 (1.172)
Epoch: [49][10/29] Time 0.182 (0.869) Data 0.000 (0.713) Loss 5.7154 (5.7094) Prec@1 0.000 (0.426) Prec@5 0.781 (1.456)
Epoch: [49][20/29] Time 2.013 (0.829) Data 1.931 (0.696) Loss 5.7096 (5.7113) Prec@1 0.781 (0.316) Prec@5 2.734 (1.376)
Test: [0/10] Time 3.072 (3.072) Loss 5.7060 (5.7060) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
* Prec@1 0.333 Prec@5 1.667
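One more observation: by the later epochs the training and test loss both plateau around 5.71, which matches the cross-entropy of a uniform softmax over ~300 classes (again assuming exactly 300 classes):

import math

num_classes = 300  # approximate
# If the softmax output is uniform, the cross-entropy loss is
# -log(1/num_classes) = log(num_classes)
print(math.log(num_classes))  # 5.7038, close to the plateaued loss of ~5.71

So the network appears to have collapsed to (near-)uniform predictions rather than learning anything.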