HIT Wanxiang Che's Group: Recent Advances and New Frontiers in Spoken Language Understanding
Paper: A Survey on Spoken Language Understanding: Recent Advances and New Frontiers
Authors: Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
Article by: Tianbao Xie, Libo Qin
Paper link: https://arxiv.org/abs/2103.03095
Repository: https://github.com/yizhen20133868/Awesome-SLU-Survey
2021 Beijing BAAI Conference
Spoken language understanding (SLU), as the core component of task-oriented dialogue systems, aims to extract a semantic frame representation from the user's utterance; this representation is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two sub-tasks:
intent detection
slot filling
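As a concrete illustration of the semantic frame these two sub-tasks produce, consider an ATIS-style flight query: intent detection assigns one label to the whole utterance, while slot filling assigns each token a BIO tag. The utterance, intent label, and slot names below are illustrative assumptions for this sketch, not examples taken from the paper.

```python
# Toy illustration of an SLU semantic frame (hypothetical ATIS-style example).
utterance = "show me flights from boston to new york"
tokens = utterance.split()

# Intent detection: a single label for the whole utterance.
intent = "atis_flight"

# Slot filling: one BIO tag per token (B- begins a slot, I- continues it, O is outside).
slots = ["O", "O", "O", "O", "B-fromloc", "O", "B-toloc", "I-toloc"]
assert len(slots) == len(tokens)

def extract_slots(tokens, tags):
    """Group BIO tags into (slot_name, value) pairs."""
    result, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new slot starts here
            if current:
                result.append(current)
            current = [tag[2:], [tok]]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)        # continue the current slot
        else:                             # "O" or an inconsistent tag ends the slot
            if current:
                result.append(current)
            current = None
    if current:
        result.append(current)
    return [(name, " ".join(words)) for name, words in result]

print(intent, extract_slots(tokens, slots))
# atis_flight [('fromloc', 'boston'), ('toloc', 'new york')]
```

Joint models in the survey predict the intent label and the per-token slot tags together, so that the two predictions can inform each other.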
The survey's main contributions are:
1) a comprehensive summary of recent advances in the SLU field;
2) the challenges and opportunities of research in complex scenarios;
3) a comprehensive collection of SLU code, datasets, and other resources.
Related Papers
[1] Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Proceedings of the workshop on Speech and Natural Language (HLT '90). Association for Computational Linguistics, USA, 96–101. DOI:https://doi.org/10.3115/116580.116613
[2] Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190, 2018.
[3] Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. Recurrent neural networks for language understanding. In Interspeech, 2013.
[4] Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. Spoken language understanding using long short-term memory neural networks. In SLT, 2014.
[5] Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Dong Yu, Xiaolong Li, and Feng Gao. Recurrent conditional random field for language understanding. In ICASSP, 2014.
[6] Gakuto Kurata, Bing Xiang, Bowen Zhou, and Mo Yu. Leveraging sentence-level information with encoder LSTM for semantic slot filling. In Proc. of EMNLP, 2016.
[7] Xiaodong Zhang and Houfeng Wang. A joint model of intent determination and slot filling for spoken language understanding. In Proc. of IJCAI, 2016.
[8] Bing Liu and Ian Lane. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech, 2016.
[9] Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. Slot-gated modeling for joint slot filling and intent prediction. In Proc. of NAACL, 2018.
[10] Changliang Li, Liang Li, and Ji Qi. A self-attentive model with gate mechanism for spoken language understanding. In Proc. of EMNLP, 2018.
[11] Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. A stack-propagation framework with token-level intent detection for spoken language understanding. In Proc. of EMNLP-IJCNLP, 2019.
[12] Yu Wang, Yilin Shen, and Hongxia Jin. A bi-model based RNN semantic frame parsing model for intent detection and slot filling. In Proc. of NAACL, 2018.
[13] Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proc. of ACL, 2019.
[14] Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, and Ting Liu. A co-interactive transformer for joint slot filling and intent detection. In ICASSP, 2021.
[15] Giuseppe Castellucci, Valentina Bellomaria, Andrea Favalli, and Raniero Romagnoli. Multi-lingual intent detection and slot filling in a joint BERT-based model. arXiv preprint arXiv:1907.02884, 2019.
[16] Qian Chen, Zhu Zhuo, and Wen Wang. BERT for joint intent classification and slot filling. arXiv preprint arXiv:1902.10909, 2019.
[17] Yun-Nung Vivian Chen, Dilek Hakkani-Tür, Gokhan Tur, Jianfeng Gao, and Li Deng. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Interspeech, 2016.
[18] L. Qin, W. Che, M. Ni, Y. Li, and T. Liu. Knowing where to leverage: Context-aware graph convolution network with an adaptive fusion layer for contextual spoken language understanding. TASLP, 2021.
[19] Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proc. of NAACL, 2019.
[20] Dechuang Teng, Libo Qin, Wanxiang Che, Sendong Zhao, and Ting Liu. Injecting word information with multi-level word adapter for chinese spoken language understanding. In ICASSP, 2021.