Firstly, I found that the training log changes when I apply a dropout layer, even though I fixed the other sources of randomness by setting the random seed. How can I make the dropout randomness reproducible as well?
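For reference, this is a minimal sketch of what I mean (a standalone Dropout module, not my real model): on CPU, re-seeding with torch.manual_seed right before the forward pass reproduces the same dropout mask.

```python
import torch
import torch.nn as nn

# Hypothetical minimal example: a fresh nn.Dropout is in training mode,
# so the mask is actually sampled on each forward pass.
drop = nn.Dropout(p=0.5)
x = torch.ones(8)

torch.manual_seed(0)
out1 = drop(x)

torch.manual_seed(0)   # same seed -> same dropout mask on CPU
out2 = drop(x)

print(torch.equal(out1, out2))
```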
Secondly, to re-enable dropout after calling net.eval(), I can use net.apply(apply_dropout), where apply_dropout is:
def apply_dropout(m):
    if type(m) == nn.Dropout:
        m.train()
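To show the part that does work, here is a minimal sketch on a hypothetical two-layer net: net.eval() puts the dropout submodule in eval mode, and net.apply(apply_dropout) switches just the dropout layer back to train mode.

```python
import torch.nn as nn

def apply_dropout(m):
    if type(m) == nn.Dropout:
        m.train()

# Hypothetical small net, just to illustrate the mode flags.
net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

net.eval()
print(net[1].training)    # dropout is disabled by eval()

net.apply(apply_dropout)
print(net[1].training)    # dropout alone is back in train mode
```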
However, when I tried the inverse, disabling the dropout layer in the training step after calling net.train(), it did not work. The de_apply_dropout function is:
def de_apply_dropout(m):
    if type(m) == nn.Dropout:
        m.eval()
The purpose of these operations is to enable the dropout layer in some epochs and disable it in others while the network trains. How can I solve this?
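To make the intended per-epoch toggle concrete, here is a sketch on the same hypothetical two-layer net. One thing I am unsure about is ordering: since net.train() recursively puts every submodule (including dropout) back into train mode, the net.apply(de_apply_dropout) call would have to come after net.train(), or it gets undone.

```python
import torch.nn as nn

def de_apply_dropout(m):
    if type(m) == nn.Dropout:
        m.eval()

net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

net.train()                    # resets EVERY submodule to train mode
net.apply(de_apply_dropout)    # applied after train(), so it sticks

print(net[1].training)         # dropout is off for this epoch
print(net[0].training)         # the linear layer still trains normally
```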
Thanks in advance.