Getting this RuntimeError: cudnn RNN backward can only be called in training mode

I'm using PyTorch 1.0, Python 3.7.3, CUDA 10.0, and cuDNN 7.5, and I get this error:

Traceback (most recent call last):
  File "main.py", line 96, in <module>
    main(args)
  File "main.py", line 70, in main
    disc_loss, gen_loss = trainer.train_one_epoch_adversarial()
  File "C:\Users\CSE495-NBM\Desktop\unmt_2\libs\trainer.py", line 432, in train_one_epoch_adversarial
    loss.backward()
  File "C:\Users\CSE495-NBM\Miniconda3\lib\site-packages\torch\tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\CSE495-NBM\Miniconda3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cudnn RNN backward can only be called in training mode

Hi Sajid,

Could you post some of your code? Are you doing model.eval() at some point before calling backward?

I believe it will work if you call trainer.train() before:

disc_loss, gen_loss = trainer.train_one_epoch_adversarial()

Make sure that your training and testing procedure follows this pattern:


model = MyModel()

for epoch in range(num_epochs):
    model.train()                      # training mode: cuDNN RNN backward is allowed
    for train_batch in train_loader:
        ...                            # forward pass, loss.backward(), optimizer.step()

    model.eval()                       # evaluation mode: inference only
    with torch.no_grad():              # no graph is built, so backward is never needed
        for test_batch in test_loader:
            ...                        # forward pass, compute metrics
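
To see why the mode matters, here is a minimal sketch (the LSTM sizes and tensor names are just placeholders) that reproduces the error on a CUDA machine with cuDNN: backward through a cuDNN RNN fails in eval mode and works again after switching back to train mode:

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16).cuda()
x = torch.randn(5, 3, 8, device="cuda")   # (seq_len, batch, input_size)

rnn.eval()                                 # eval mode
out, _ = rnn(x)
# out.sum().backward()                     # raises: cudnn RNN backward can only be called in training mode

rnn.train()                                # back to training mode
out, _ = rnn(x)
out.sum().backward()                       # works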

Loading model parameters.
/usr/local/lib/python3.7/dist-packages/torchtext/data/field.py:197: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train), lengths
/usr/local/lib/python3.7/dist-packages/torchtext/data/field.py:198: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train)
/content/Seq2Sick/onmt/translate/Translator.py:48: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  def var(a): return Variable(a, volatile=True)
/content/Seq2Sick/onmt/modules/GlobalAttention.py:179: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  align_vectors = self.sm(align.view(batch*targetL, sourceL))
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py:119: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  input = module(input)
/content/Seq2Sick/onmt/translate/Translator.py:191: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  src.volatile = False
attack.py:64: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  output_a, attn, output_i= translator.getOutput(new_embedding, src, batch)
tensor(18.6335, device='cuda:0') 	 tensor(0., device='cuda:0')
tensor(999., device='cuda:0') 	 tensor(0., device='cuda:0')
Traceback (most recent call last):
  File "attack.py", line 312, in <module>
    main()
  File "attack.py", line 272, in main
    modifier, output_a, attn, new_word, output_i, CFLAG = attack(all_word_embedding, label_onehot, translator, src, batch, new_embedding, input_embedding, modifier, const, GROUP_LASSO, TARGETED, GRAD_REG, NN)
  File "attack.py", line 138, in attack
    loss.backward(retain_graph=True)
  File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: cudnn RNN backward can only be called in training mode

@Prerna_Dhareshwar @Sajid_Ahmed

Please have a look at my issue mentioned above.

@Thabang_Lukhetho @aknirala

Please have a look at my issue mentioned above.

Answered in your cross post.
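
For anyone who finds this thread later: if the model genuinely has to stay in eval() while you backpropagate through its RNN (as in this adversarial-attack setup, where dropout must stay off but gradients w.r.t. the input embedding are still needed), one workaround is to disable cuDNN for that block so PyTorch falls back to its native RNN kernels, whose backward also works in eval mode. A rough sketch, where forward_and_loss is a hypothetical stand-in for whatever attack.py does to produce the loss:

import torch

# Fall back to the native (non-cuDNN) RNN implementation inside this block.
# Slower, but its backward pass does not require training mode.
with torch.backends.cudnn.flags(enabled=False):
    loss = forward_and_loss(translator, new_embedding, src, batch)
    loss.backward(retain_graph=True)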