I want to use a pretrained model to supervise another model, which requires putting the pretrained model's batch normalization layers into eval mode. However, because the pretrained model also contains an LSTM, the backward pass raises the following error:
```
Traceback (most recent call last):
  File "main.py", line 153, in val
    cost.backward()
  File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 145, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
  File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 208, in _do_backward
    result = super(NestedIOFunction, self)._do_backward(gradients, retain_variables)
  File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 216, in backward
    result = self.backward_extended(*nested_gradients)
  File "/home/jrmei/.local/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 199, in backward_extended
    self._reserve_clone = self.reserve.clone()
AttributeError: 'CudnnRNN' object has no attribute 'reserve'
```
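For reference, here is a minimal sketch of the pattern that seems to trigger this (the `Teacher` module, shapes, and loss are placeholders for illustration, not my actual `main.py`): calling `.eval()` to freeze the BatchNorm statistics also switches the cuDNN LSTM into inference mode, and backpropagating through it then fails.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Placeholder teacher network: an LSTM followed by batch normalization.
class Teacher(nn.Module):
    def __init__(self):
        super(Teacher, self).__init__()
        self.lstm = nn.LSTM(10, 10)
        self.bn = nn.BatchNorm1d(10)

    def forward(self, x):  # x: (seq_len, batch, 10)
        out, _ = self.lstm(x)
        s, b, f = out.size()
        # Apply BatchNorm1d over the feature dimension.
        return self.bn(out.view(s * b, f)).view(s, b, f)

teacher = Teacher().cuda()
teacher.eval()  # freezes the BatchNorm running statistics, but also puts
                # the cuDNN LSTM into inference mode

x = Variable(torch.randn(5, 4, 10).cuda())
cost = teacher(x).sum()  # stand-in for the real supervision loss
cost.backward()          # AttributeError: 'CudnnRNN' object has no attribute 'reserve'
```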
Do you have any suggestions? Thanks very much.