no code implementations • Findings (EMNLP) 2021 • Kaiyu Huang, Hao Yu, Junpeng Liu, Wei Liu, Jingxiang Cao, Degen Huang
Experimental results on five benchmarks and four cross-domain datasets show that the lexicon-based graph convolutional network successfully captures the information of candidate words, improving performance on the benchmarks (Bakeoff-2005 and CTB6) and the cross-domain datasets (SIGHAN-2010).
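The paper's code is not released; as an illustration only, the sketch below shows one way a single graph-convolution step could fuse lexicon candidate-word information into per-character representations for Chinese word segmentation. The function name, dimensions, adjacency construction, and residual fusion are all assumptions, not the paper's actual architecture.

```python
# Minimal sketch (not the paper's code): one graph-convolution step that
# injects lexicon candidate-word information into character representations
# for Chinese word segmentation. All names and dimensions are illustrative.
import torch

def lexicon_gcn_step(char_states, word_embs, adj):
    """char_states: (num_chars, d)  per-character hidden states
    word_embs:   (num_words, d)  embeddings of lexicon-matched candidate words
    adj:         (num_chars, num_words) 1.0 where a candidate word covers a character
    Returns character states enriched with candidate-word information."""
    # Normalize by the number of candidate words covering each character.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    word_msg = adj @ word_embs / deg           # aggregate covering words per character
    return torch.relu(char_states + word_msg)  # residual fusion, then nonlinearity

# Toy example: 4 characters, 2 candidate words matched from a lexicon.
chars = torch.randn(4, 8)
words = torch.randn(2, 8)
adj = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
print(lexicon_gcn_step(chars, words, adj).shape)  # torch.Size([4, 8])
```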
no code implementations • WMT (EMNLP) 2021 • Huan Liu, Junpeng Liu, Kaiyu Huang, Degen Huang
This paper describes DUT-NLP Lab’s submission to the WMT-21 triangular machine translation shared task.
no code implementations • COLING 2022 • Junpeng Liu, Yanyan Zou, Yuxuan Xi, Shengjie Li, Mian Ma, Zhuoye Ding
In this work, rather than merely forcing a summarization system to pay more attention to the salient pieces, we propose to have the model explicitly perceive the redundant parts of an input dialogue history during the training phase.
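One plausible way to operationalize "perceiving redundancy during training" is an auxiliary classification loss over utterances, sketched below under stated assumptions: the `RedundancyHead` module, the heuristic redundancy labels, and the weighting factor are all hypothetical, not the paper's formulation.

```python
# Minimal sketch (assumptions, not the paper's method): an auxiliary loss
# that asks the model to label each utterance of the dialogue history as
# redundant or salient, so redundancy is modeled explicitly in training.
import torch
import torch.nn.functional as F

class RedundancyHead(torch.nn.Module):
    """Auxiliary head: predicts whether each utterance is redundant."""
    def __init__(self, hidden_size):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_size, 2)

    def forward(self, utt_states, redundant_labels):
        # utt_states: (num_utts, d); redundant_labels: (num_utts,) in {0, 1}
        return F.cross_entropy(self.proj(utt_states), redundant_labels)

# Joint objective: generation loss plus a weighted redundancy loss.
head = RedundancyHead(hidden_size=8)
utt_states = torch.randn(5, 8)          # pooled encoder states per utterance
labels = torch.tensor([0, 1, 0, 1, 1])  # heuristic redundancy labels (assumed)
summary_loss = torch.tensor(2.3)        # placeholder generation loss
total_loss = summary_loss + 0.5 * head(utt_states, labels)
```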
no code implementations • AACL (iwdp) 2020 • Kaiyu Huang, Junpeng Liu, Jingxiang Cao, Degen Huang
This paper proposes a three-step strategy to improve performance on discourse-level Chinese word segmentation (CWS).
1 code implementation • 21 Nov 2024 • Yanfeng Ji, Shutong Wang, Ruyi Xu, Jingying Chen, Xinzhou Jiang, Zhengyu Deng, Yuxuan Quan, Junpeng Liu
Children with Autism Spectrum Disorder (ASD) often exhibit atypical facial expressions.
no code implementations • 17 Oct 2024 • Junpeng Liu, Tianyue Ou, YiFan Song, Yuxiao Qu, Wai Lam, Chenyan Xiong, Wenhu Chen, Graham Neubig, Xiang Yue
Text-rich visual understanding, the ability to process environments where dense textual content is integrated with visuals, is crucial for multimodal large language models (MLLMs) to interact effectively with structured environments.
1 code implementation • 9 Apr 2024 • Junpeng Liu, YiFan Song, Bill Yuchen Lin, Wai Lam, Graham Neubig, Yuanzhi Li, Xiang Yue
Multimodal Large Language Models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks.
1 code implementation • Findings (EMNLP) 2021 • Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan, Xiaojie Wang
To capture the varied topic information of a conversation and outline salient facts for the captured topics, this work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation, which are expected to implicitly model topic changes and handle the information-scattering challenge in dialogue summarization (a sketch of the coherence-detection objective follows this entry).
Ranked #5 on Text Summarization on SAMSum
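As referenced above, the following is a minimal sketch of one plausible margin-based form of the coherence-detection objective: score an in-order dialogue window above a shuffled one so the encoder learns to track topic flow. The scorer, pooled encodings, and margin formulation are assumptions for illustration, not the paper's exact objective.

```python
# Minimal sketch (illustrative, assuming a margin-based formulation) of a
# coherence-detection objective: an in-order dialogue window should score
# higher than a window whose utterances have been shuffled.
import torch
import torch.nn.functional as F

def coherence_detection_loss(scorer, pos_state, neg_state, margin=1.0):
    """scorer:    module mapping a pooled window encoding (d,) to a scalar score
    pos_state: (d,) pooled encoding of the original utterance order
    neg_state: (d,) pooled encoding of a shuffled order (negative sample)"""
    return F.relu(margin - scorer(pos_state) + scorer(neg_state)).mean()

scorer = torch.nn.Linear(8, 1)  # illustrative scoring head
pos = torch.randn(8)            # encoding of the coherent window
neg = torch.randn(8)            # encoding of the shuffled window
loss = coherence_detection_loss(scorer, pos, neg)
```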