Zero-shot Cross-lingual Conversational Semantic Role Labeling

ACL ARR November 2021  ·  Anonymous

While conversational semantic role labeling (CSRL) has shown its usefulness on Chinese conversational tasks, it remains under-explored in non-Chinese languages due to the lack of multilingual CSRL annotations for parser training. To avoid expensive data collection and the error propagation of translation-based methods, we present a simple but effective approach to zero-shot cross-lingual CSRL. Our model implicitly learns language-agnostic, conversational structure-aware and semantically rich representations through hierarchical encoders and elaborately designed pre-training objectives. Experimental results show that our cross-lingual model not only outperforms baselines by large margins but is also robust in low-resource scenarios. More importantly, we confirm the usefulness of CSRL for English conversational tasks such as question-in-context rewriting and multi-turn dialogue response generation by incorporating CSRL information into downstream conversation-based models. We believe this finding is significant and will facilitate research on English dialogue tasks, which suffer from the problems of ellipsis and anaphora.

