Issues with Chatbot NLP tutorial: average loss does not come down, and generated sentences have meaningless endings

Hello, I posted an issue on the official tutorial's GitHub.
Please take a look.

First I ran the tutorial on the Cornell Movie-Dialogs Corpus, and it performed well.

Then I changed the corpus to qingyun, and it also worked well.

Finally, I changed the corpus to our own chat corpus, and this time it failed: during training, the average loss stops decreasing at a high level, around 5 (with the qingyun corpus it reached 0.32). Besides, the bot's answers are obviously incorrect and not related to the inputs at all.

Can anyone help? Are there any parameters I can change, or can our own corpus simply not be used?
Thanks in advance!

While I understand you might get frustrated seeing your use case fail, I would like to ask you to use proper language on this board, please.

I would recommend scaling down the use case and trying to overfit a small data sample (e.g. just 10 samples) by playing around with the hyperparameters of your training (learning rate, weight decay, etc.).
Once your model can properly overfit that sample, you can carefully scale the use case back up by adding more data and observing the training.
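The overfitting sanity check could look something like the sketch below. This is not the tutorial's seq2seq model; it uses a deliberately tiny stand-in model and random token ids, just to illustrate the idea: with ~10 samples and reasonable hyperparameters, the loss should be drivable close to zero, and if it is not, the problem is in the model or training setup rather than the corpus size.

```python
# Sanity check: overfit ~10 samples with a tiny model (a sketch, not the
# tutorial's actual seq2seq architecture). If loss won't approach zero here,
# tune lr / weight_decay before scaling up to the full corpus.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 10 toy "sentences": random token ids standing in for a real tokenized corpus
vocab_size, seq_len, n_samples = 50, 8, 10
x = torch.randint(vocab_size, (n_samples, seq_len))
y = torch.randint(vocab_size, (n_samples, seq_len))

# Tiny model with enough capacity to memorize 10 samples
model = nn.Sequential(
    nn.Embedding(vocab_size, 32),
    nn.Flatten(),
    nn.Linear(32 * seq_len, seq_len * vocab_size),
)

# The hyperparameters to play with during the check
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=0.0)
loss_fn = nn.CrossEntropyLoss()

losses = []
for step in range(300):
    logits = model(x).view(n_samples * seq_len, vocab_size)
    loss = loss_fn(logits, y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"initial loss {losses[0]:.3f} -> final loss {losses[-1]:.3f}")
```

If the final loss stays high even on 10 samples, lowering the learning rate or removing weight decay is usually the first thing to try; once it memorizes the sample, gradually add more data and watch whether the loss curve keeps improving.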

Thanks for your reply, and sorry for my improper words.