I am new to PyTorch, and I have run into a problem.
The training loss and validation loss do not change at all when I change the learning rate; I tried several values, as shown below. But if I change the batch size, the training loss and validation loss on each epoch do change.
I would like to know why the per-epoch training and validation losses are identical across different learning rates.
SGD 0.001
-------------------------
7401
1.2.0
There are 1 CUDA devices
Setting torch GPU to 0
Using device:0
begin training!
Epoch:1/150
Train_loss:1.79059
Vali_loss:1.79172
Time_elapse:13.118939876556396
Epoch:2/150
Train_loss:1.78832
Vali_loss:1.78894
Time_elapse:23.398069381713867
Epoch:3/150
Train_loss:1.78577
Vali_loss:1.78627
Time_elapse:33.67959260940552
...
SGD 0.01
-------------------------
7401
1.2.0
There are 1 CUDA devices
Setting torch GPU to 0
Using device:0
begin training!
Epoch:1/150
Train_loss:1.79059
Vali_loss:1.79172
Time_elapse:13.118939876556396
Epoch:2/150
Train_loss:1.78832
Vali_loss:1.78894
Time_elapse:23.398069381713867
Epoch:3/150
Train_loss:1.78577
Vali_loss:1.78627
Time_elapse:33.67959260940552
...
Thanks!
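
One quick way to see which learning rate is actually being applied is to print it from the optimizer's param_groups once per epoch. Below is a minimal sketch; the model is a stand-in, and optimizer represents the SGD optimizer from the runs above:

import torch

model = torch.nn.Linear(10, 2)  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(3):
    # ... training step would go here ...
    for param_group in optimizer.param_groups:
        print(f"epoch {epoch}: effective lr = {param_group['lr']}")

If the printed value does not match the lr passed to SGD, something else (a scheduler or a custom decay function) is overwriting it.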
I double-checked my code just now, and I think I have finally found the cause: I was using learning rate decay, and the decay logic overwrote the learning rate I had set on the optimizer, so that value was never actually used during training.
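
For anyone who hits the same behavior: a common culprit is a decay helper that computes the learning rate from a hard-coded base value instead of from the optimizer's current setting. Here is a minimal sketch of that bug pattern, assuming a hypothetical adjust_learning_rate helper (the name and constants are illustrative, not the exact code from this thread):

import torch

model = torch.nn.Linear(10, 2)
# lr=0.01 here, but it gets overwritten before the first step
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def adjust_learning_rate(optimizer, epoch, base_lr=0.1):
    # Bug: the schedule starts from the hard-coded base_lr, so the
    # lr passed to SGD above is ignored entirely.
    lr = base_lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

for epoch in range(3):
    adjust_learning_rate(optimizer, epoch)
    print(optimizer.param_groups[0]['lr'])  # prints 0.1 no matter what lr SGD got

Because the schedule never reads the optimizer's initial lr, runs started with 0.001 and 0.01 follow exactly the same learning-rate trajectory, which is why the loss values come out identical epoch by epoch.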