no code implementations • ACL 2022 • Yimeng Zhuang, Jing Zhang, Mei Tu
(2) A sparse attention matrix estimation module, which predicts dominant elements of an attention matrix based on the output of the previous hidden state cross module.
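The idea of keeping only the dominant elements of an attention matrix can be illustrated with a toy top-k sparse attention in NumPy. This is a minimal sketch, not the paper's learned estimation module: here the "dominant" entries are simply the top-k raw scores per row, whereas the paper predicts them from the previous hidden state cross module.

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    """Toy sketch: keep only the top_k dominant entries per row of the
    attention matrix; a stand-in for a learned sparsity estimator."""
    scores = q @ k.T / np.sqrt(q.shape[-1])               # (n, n) raw scores
    # Build an additive mask that keeps only the top_k entries per row.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    masked = scores + mask
    # Softmax over the surviving entries only (masked ones get weight 0).
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))
k = rng.normal(size=(8, 16))
v = rng.normal(size=(8, 16))
out = sparse_attention(q, k, v, top_k=4)
```

Because only k entries per row survive, the value aggregation touches a fixed number of positions per query, which is the efficiency argument behind sparse attention.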
no code implementations • 26 Apr 2024 • Haojie Zhang, Yimeng Zhuang
Our approach enriches the context by utilizing label semantics as suffix prompts.
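Enriching the input with label semantics as a suffix prompt can be sketched in a few lines. The template and label names below are illustrative assumptions, not the paper's exact prompt format.

```python
def build_suffix_prompt(text, labels):
    """Hypothetical sketch: append the label names as a suffix prompt so
    the encoder sees label semantics alongside the input text."""
    label_part = " ".join(f"[{label}]" for label in labels)
    return f"{text} Options: {label_part}"

prompt = build_suffix_prompt("I love this movie.", ["positive", "negative"])
# prompt == "I love this movie. Options: [positive] [negative]"
```

The model can then attend from the input tokens to the label tokens, letting label wording (not just a class index) inform the prediction.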
no code implementations • 25 Apr 2024 • Shen Zhang, Haojie Zhang, Jing Zhang, Xudong Zhang, Yimeng Zhuang, Jinting Wu
In human-computer interaction, it is crucial for agents to respond to humans by understanding their emotions.
no code implementations • SEMEVAL 2021 • Jing Zhang, Yimeng Zhuang, Yinpei Su
This paper describes our system for SemEval-2021 Task 4, Reading Comprehension of Abstract Meaning, which achieved 1st place on subtask 1 and 2nd place on subtask 2 on the leaderboard.
no code implementations • WS 2020 • Yimeng Zhuang, Yuan Zhang, Lijie Wang
This paper describes the LIT Team's submission to the IWSLT 2020 open domain translation task, focusing primarily on the Japanese-to-Chinese translation direction.
no code implementations • ACL 2019 • Yimeng Zhuang, Huadong Wang
Multi-passage reading comprehension requires the ability to combine cross-passage information and reason over multiple passages to infer the answer.
no code implementations • SEMEVAL 2019 • Yimeng Zhuang
This paper gives a detailed system description of our submission to SemEval-2019 Task 9 Subtask A.
no code implementations • EMNLP 2018 • Yimeng Zhuang, Jinghui Xie, Yinhe Zheng, Xuan Zhu
Most models for learning word embeddings are trained on the context information of words, more precisely, first-order co-occurrence relations.
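A classic count-based instance of learning from first-order co-occurrence is to build a word-word co-occurrence matrix and factorize it. The sketch below (toy corpus, plain SVD) only illustrates what "first order co-occurrence relations" means; it is not the method proposed in the paper.

```python
import numpy as np
from itertools import combinations

# Toy corpus: each sentence is treated as one co-occurrence window.
sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["cat", "and", "dog"]]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric first-order co-occurrence counts.
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(s, 2):          # all word pairs in the window
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

# Low-rank factorization of the count matrix yields dense word vectors.
u, sing, _ = np.linalg.svd(counts)
dim = 2
embeddings = u[:, :dim] * sing[:dim]          # one row per vocabulary word
```

Words that co-occur with similar neighbors (here "cat" and "dog") end up with similar rows, which is exactly the signal first-order co-occurrence provides.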