I would like to ask whether the following setup is possible:

input → layers 1, 2, 3 → layers 4, 5, 6 (CNN and RNN) → predicted output

I compute an L1 loss between the predicted output and the target output, and I would like to freeze the weights of layers 4, 5, and 6 so that the loss only updates the parameters of layers 1, 2, and 3.
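For reference, here is a minimal sketch of the kind of architecture I mean (module names like `layers123`, `conv456`, `rnn456` are placeholders, not my actual code):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # layers 1-3: the part I want to keep training
        self.layers123 = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
        )
        # layers 4-6: CNN + RNN, the part I want to freeze
        self.conv456 = nn.Conv1d(32, 32, kernel_size=3, padding=1)
        self.rnn456 = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                      # x: (batch, 1, time)
        h = self.layers123(x)                  # (batch, 32, time)
        h = self.conv456(h)                    # (batch, 32, time)
        h, _ = self.rnn456(h.transpose(1, 2))  # (batch, time, 32)
        return self.head(h)                    # (batch, time, 1)
```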
I tried this by putting layers 4, 5, and 6 into eval mode and then calling their forward as usual, but I get the following error when I call loss.backward():
RuntimeError: cudnn RNN backward can only be called in training mode
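Concretely, my failing attempt looks roughly like this (again just a sketch, using the placeholder model above; it needs a CUDA GPU with cuDNN enabled to reproduce the error):

```python
model = Net().cuda()
model.train()
# my attempt at freezing: put layers 4-6 in eval mode
model.conv456.eval()
model.rnn456.eval()
model.head.eval()

# only pass the layer 1-3 parameters to the optimizer
optimizer = torch.optim.Adam(model.layers123.parameters(), lr=1e-3)
criterion = nn.L1Loss()

x = torch.randn(8, 1, 50, device="cuda")
target = torch.randn(8, 50, 1, device="cuda")

pred = model(x)
loss = criterion(pred, target)
loss.backward()  # RuntimeError: cudnn RNN backward can only be called in training mode
```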
Many thanks for any advice.