TY - GEN
T1 - Plug-and-Play Adaptation for Continuously-updated QA
AU - Lee, Kyungjae
AU - Han, Wookje
AU - Hwang, Seung Won
AU - Lee, Hwaran
AU - Park, Joonsuk
AU - Lee, Sang Woo
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks for assessing LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective than a fine-tuning baseline in terms of the updates/forgets ratio.
AB - Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks for assessing LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective than a fine-tuning baseline in terms of the updates/forgets ratio.
UR - http://www.scopus.com/inward/record.url?scp=85136758981&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.findings-acl.37
DO - 10.18653/v1/2022.findings-acl.37
M3 - Conference contribution
AN - SCOPUS:85136758981
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 438
EP - 447
BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022
A2 - Muresan, Smaranda
A2 - Nakov, Preslav
A2 - Villavicencio, Aline
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the Association for Computational Linguistics: ACL 2022
Y2 - 22 May 2022 through 27 May 2022
ER -