Abstract
Knowledge-grounded conversation models aim to generate informative responses for a given dialogue context based on external knowledge. To generate an informative and context-coherent response, it is important to combine the dialogue context and the external knowledge in a balanced manner. However, existing studies have paid less attention to finding appropriate knowledge sentences in external knowledge sources than to generating proper sentences with correct dialogue acts. In this paper, we propose two knowledge selection strategies, 1) Reduce-Match and 2) Match-Reduce, and explore several neural knowledge-grounded conversation models based on each strategy. Models based on the Reduce-Match strategy first distill the whole dialogue context into a single vector that preserves its salient features and then compare this context vector with the representations of the knowledge sentences to predict the relevant knowledge sentence. Models based on the Match-Reduce strategy first match every turn of the context with the knowledge sentences to capture fine-grained interactions and then aggregate these matching results while minimizing information loss to predict the relevant knowledge sentence. Experimental results show that conversation models using each of our knowledge selection strategies outperform competitive baselines not only in knowledge selection accuracy but also in response generation performance. Our best model based on Match-Reduce outperforms the baselines in comparative studies on the Wizard of Wikipedia dataset, and our best model based on Reduce-Match outperforms them on the CMU Document Grounded Conversations dataset.
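The abstract contrasts the two strategies only at a high level, so the following minimal sketch illustrates how the ordering of the "reduce" and "match" steps differs. This is an assumed illustration, not the paper's actual architecture: the mean pooling, dot-product matching, and max aggregation are hypothetical placeholders for the learned components described in the paper, and all variable names are invented for the example.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: T context turns and K candidate knowledge sentences,
# each already encoded into a d-dimensional vector by some sentence encoder
# (the encoder itself is not shown here).
T, K, d = 4, 10, 256
turn_vecs = torch.randn(T, d)       # one vector per dialogue turn
knowledge_vecs = torch.randn(K, d)  # one vector per knowledge sentence

def reduce_match(turn_vecs, knowledge_vecs):
    """Reduce-Match: first distill the whole context into a single vector,
    then match that vector against every knowledge sentence."""
    context_vec = turn_vecs.mean(dim=0)        # "reduce": pool the context (placeholder for a learned reducer)
    scores = knowledge_vecs @ context_vec      # "match": similarity of each knowledge sentence to the context
    return F.softmax(scores, dim=0)            # distribution over knowledge sentences

def match_reduce(turn_vecs, knowledge_vecs):
    """Match-Reduce: first match every turn with every knowledge sentence,
    then aggregate the fine-grained matching scores."""
    pair_scores = turn_vecs @ knowledge_vecs.t()   # "match": (T, K) turn-knowledge interactions
    scores = pair_scores.max(dim=0).values         # "reduce": aggregate over turns (max as a placeholder)
    return F.softmax(scores, dim=0)

# The selected knowledge sentence is the highest-scoring candidate under either strategy.
print(reduce_match(turn_vecs, knowledge_vecs).argmax().item())
print(match_reduce(turn_vecs, knowledge_vecs).argmax().item())
```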
| Original language | English |
| --- | --- |
| Article number | 9136717 |
| Pages (from-to) | 126201-126214 |
| Number of pages | 14 |
| Journal | IEEE Access |
| Volume | 8 |
| DOIs | |
| State | Published - 2020 |
Keywords
- dialogue
- knowledge selection
- knowledge-grounded conversation
- text matching