I’m working on a semantic parsing problem for question answering over a knowledge graph.
Here is an example:
Q: Who is Barack Obama?
Cypher: MATCH (n) WHERE n.name = "Barack Obama" RETURN n
I used an RNN-based Seq2Seq architecture as my baseline to check whether the model can capture new entities appearing in the question. Since my corpus is limited, there are a lot of unseen entities (out-of-vocabulary tokens). I have tried simplifying the output to just "Barack Obama" instead of the whole query, but that still doesn’t work for unseen entities: the model cannot copy-paste an entity string from the input sentence into the output sentence. Should I train a separate NER model, or would using a pretrained token embedding help in this case?
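To make the copying behavior I have in mind concrete: what I think is missing from my vanilla Seq2Seq is a copy (pointer-generator style) mechanism that mixes the decoder’s vocabulary distribution with the attention distribution over source tokens, so an OOV entity in the input can still receive probability mass in the output. Here is a minimal numpy sketch of that mixing step (the function name, shapes, and the extended-vocab convention are my own illustration, not from any particular library):

```python
import numpy as np

def copy_augmented_dist(vocab_logits, attn_weights, src_ids, p_gen, ext_vocab_size):
    """Mix a generation distribution with a copy distribution.

    vocab_logits   : (V,) decoder scores over the fixed vocabulary
    attn_weights   : (T,) attention weights over source tokens (sum to 1)
    src_ids        : (T,) source token ids in an extended vocab, where
                     OOV input tokens get temporary ids >= V
    p_gen          : scalar in [0, 1], probability of generating vs copying
    ext_vocab_size : V plus the number of OOV slots for this example
    """
    # softmax over the fixed vocab for the "generate" path
    gen = np.exp(vocab_logits - vocab_logits.max())
    gen /= gen.sum()

    final = np.zeros(ext_vocab_size)
    final[: len(gen)] = p_gen * gen
    # scatter the "copy" mass onto the source token ids; OOV entities
    # like "Barack Obama" land in slots >= V and stay reachable
    np.add.at(final, src_ids, (1.0 - p_gen) * attn_weights)
    return final
```

With this, even a token the decoder has never seen can be emitted, because its probability comes from the attention weights rather than the fixed output softmax.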