no code implementations • FNP (LREC) 2022 • Chenyang Lyu, Tianbo Ji, Quanwei Sun, Liting Zhou
In this paper, we describe our DCU-Lorcan system for the FinCausal 2022 shared task: span-based cause and effect extraction from financial documents.
no code implementations • 21 Feb 2024 • Chenyang Lyu, Minghao Wu, Alham Fikri Aji
Large Language Models (LLMs) have demonstrated remarkable capabilities across various applications, fundamentally reshaping the landscape of natural language processing (NLP) research.
no code implementations • 4 Dec 2023 • Bingshuai Liu, Chenyang Lyu, Zijun Min, Zhanyu Wang, Jinsong Su, Longyue Wang
The advancement of Large Language Models (LLMs) has brought substantial attention to the Chain of Thought (CoT) approach, primarily due to its ability to enhance the capability of LLMs on complex reasoning tasks.
no code implementations • 25 Nov 2023 • Zhanyu Wang, Longyue Wang, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou, Shuming Shi, Zhaopeng Tu
While the recent advances in Multimodal Large Language Models (MLLMs) constitute a significant leap forward in the field, these models are predominantly confined to the realm of input-side multimodal comprehension, lacking the capacity for multimodal content generation.
no code implementations • 13 Nov 2023 • Yunxin Li, Longyue Wang, Baotian Hu, Xinyu Chen, Wanqi Zhong, Chenyang Lyu, Wei Wang, Min Zhang
The emergence of multimodal large models (MLMs) has significantly advanced the field of visual understanding, offering remarkable capabilities in the realm of visual question answering (VQA).
no code implementations • 6 Nov 2023 • Longyue Wang, Zhaopeng Tu, Yan Gu, Siyou Liu, Dian Yu, Qingsong Ma, Chenyang Lyu, Liting Zhou, Chao-Hong Liu, Yufeng Ma, WeiYu Chen, Yvette Graham, Bonnie Webber, Philipp Koehn, Andy Way, Yulin Yuan, Shuming Shi
To foster progress in this domain, we hold a new shared task at WMT 2023: the first edition of the Discourse-Level Literary Translation task.
no code implementations • 6 Jul 2023 • Bingshuai Liu, Longyue Wang, Chenyang Lyu, Yong Zhang, Jinsong Su, Shuming Shi, Zhaopeng Tu
Accordingly, we propose a novel multi-modal metric that considers object-text alignment to filter the fine-tuning data in the target culture, which is used to fine-tune a T2I model to improve cross-cultural generation.
1 code implementation • 15 Jun 2023 • Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied.
no code implementations • 23 May 2023 • Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.
no code implementations • 16 May 2023 • Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster
We show that by integrating our approach into VideoQA systems we can achieve comparable, or even superior, performance with a significant speed-up for both training and inference.
no code implementations • 14 May 2023 • Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster
Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process, deciding whether to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) the question is currently focused on.
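The entry above describes advancing through video frames role by role. A minimal sketch of that general idea follows; the role inventory, scoring scheme, and function names are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical sketch of SRL-guided dynamic frame selection: the question
# is decomposed into SRL roles, and we scan frames in order, committing to
# a frame only once it sufficiently matches the role currently in focus,
# then moving on to the next role.

SRL_ROLES = ["agent", "verb", "patient"]  # simplified role inventory

def select_frames(frame_role_scores, threshold=0.5):
    """frame_role_scores: per-frame dicts mapping role -> match score.
    Returns indices of frames chosen while resolving each role in turn."""
    selected = []
    role_idx = 0
    for frame_idx, scores in enumerate(frame_role_scores):
        if role_idx >= len(SRL_ROLES):
            break  # all roles resolved; stop scanning frames
        role = SRL_ROLES[role_idx]
        if scores.get(role, 0.0) >= threshold:
            selected.append(frame_idx)  # this frame resolves the current role
            role_idx += 1               # advance to the next SRL role
    return selected

# Toy scores for a 4-frame clip: frame 0 matches the agent, frame 1 the
# verb, and frame 3 the patient, so frame 2 is skipped entirely.
scores = [
    {"agent": 0.9, "verb": 0.1, "patient": 0.0},
    {"agent": 0.2, "verb": 0.7, "patient": 0.1},
    {"agent": 0.1, "verb": 0.3, "patient": 0.2},
    {"agent": 0.0, "verb": 0.2, "patient": 0.8},
]
print(select_frames(scores))  # → [0, 1, 3]
```

Skipping frames that do not help resolve the current role is what yields the speed-up over processing every frame.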
no code implementations • 2 May 2023 • Chenyang Lyu, Zefeng Du, Jitao Xu, Yitao Duan, Minghao Wu, Teresa Lynn, Alham Fikri Aji, Derek F. Wong, Siyou Liu, Longyue Wang
We conclude by emphasizing the critical role of LLMs in guiding the future evolution of MT and offer a roadmap for future exploration in the sector.
1 code implementation • 5 Apr 2023 • Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, Zhaopeng Tu
Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
1 code implementation • 23 Mar 2023 • Chenyang Lyu, Manh-Duy Nguyen, Van-Tu Ninh, Liting Zhou, Cathal Gurrin, Jennifer Foster
Recent years have witnessed an increasing amount of dialogue and conversation on the web, especially on social media.
no code implementations • 17 Dec 2022 • Chenyang Lyu, Linyi Yang, Yue Zhang, Yvette Graham, Jennifer Foster
User and product information associated with a review is useful for sentiment polarity prediction.
no code implementations • 9 Oct 2022 • Tianbo Ji, Chenyang Lyu, Gareth Jones, Liting Zhou, Yvette Graham
Question Generation (QG) aims to automate the task of composing questions for a passage with a set of chosen answers found within the passage.
1 code implementation • insights (ACL) 2022 • Chenyang Lyu, Jennifer Foster, Yvette Graham
Past work investigating the out-of-domain performance of QA systems has mainly focused on general domains (e.g., news, Wikipedia), underestimating the importance of subdomains defined by the internal characteristics of QA datasets.
1 code implementation • ACL 2022 • Tianbo Ji, Yvette Graham, Gareth J. F. Jones, Chenyang Lyu, Qun Liu
Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present a human evaluation methodology that is highly reliable while remaining feasible and low-cost.
no code implementations • EMNLP 2021 • Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang, Qun Liu
Template-based QG uses linguistically-informed heuristics to transform declarative sentences into interrogatives, whereas supervised QG uses existing Question Answering (QA) datasets to train a system to generate a question given a passage and an answer.
1 code implementation • COLING 2020 • Chenyang Lyu, Jennifer Foster, Yvette Graham
We achieve this by explicitly storing representations of reviews written by the same user and about the same product, forcing the model to memorize all reviews for one particular user and product.
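The memory idea above can be sketched in a few lines; the class, keying scheme, and mean-pooling below are illustrative assumptions, not the paper's model:

```python
from collections import defaultdict

# Hypothetical sketch of a user/product review memory: review
# representations are stored keyed by user and by product, and a context
# vector for a new review is built by mean-pooling that user's and that
# product's past review vectors and concatenating the two means.

class ReviewMemory:
    def __init__(self, dim):
        self.dim = dim
        self.by_user = defaultdict(list)     # user id -> list of review vectors
        self.by_product = defaultdict(list)  # product id -> list of review vectors

    def add(self, user, product, vec):
        self.by_user[user].append(vec)
        self.by_product[product].append(vec)

    def _mean(self, vecs):
        if not vecs:
            return [0.0] * self.dim  # no history yet: zero vector
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(self.dim)]

    def context(self, user, product):
        # concatenate the mean user vector and the mean product vector
        return self._mean(self.by_user[user]) + self._mean(self.by_product[product])

mem = ReviewMemory(dim=2)
mem.add("u1", "p1", [1.0, 0.0])
mem.add("u1", "p2", [0.0, 1.0])
print(mem.context("u1", "p1"))  # → [0.5, 0.5, 1.0, 0.0]
```

The resulting context vector would then condition the sentiment classifier alongside the text of the review itself.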
1 code implementation • 5 Oct 2020 • Yuwei Li, Shouling Ji, Yuan Chen, Sizhuang Liang, Wei-Han Lee, Yueyao Chen, Chenyang Lyu, Chunming Wu, Raheem Beyah, Peng Cheng, Kangjie Lu, Ting Wang
We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.
Cryptography and Security
no code implementations • 7 Jul 2018 • Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jian-hai Chen, Jing Chen
In total, our system discovers more than twice as many unique crashes and 5,040 extra unique paths compared to the existing best seed selection strategy on the 12 evaluated applications.
Cryptography and Security