Search Results for author: Chenyang Lyu

Found 22 papers, 7 papers with code

DCU-Lorcan at FinCausal 2022: Span-based Causality Extraction from Financial Documents using Pre-trained Language Models

no code implementations FNP (LREC) 2022 Chenyang Lyu, Tianbo Ji, Quanwei Sun, Liting Zhou

In this paper, we describe our DCU-Lorcan system for the FinCausal 2022 shared task: span-based cause and effect extraction from financial documents.

Beyond Probabilities: Unveiling the Misalignment in Evaluating Large Language Models

no code implementations 21 Feb 2024 Chenyang Lyu, Minghao Wu, Alham Fikri Aji

Large Language Models (LLMs) have demonstrated remarkable capabilities across various applications, fundamentally reshaping the landscape of natural language processing (NLP) research.

Multiple-choice

Retrieval-augmented Multi-modal Chain-of-Thoughts Reasoning for Large Language Models

no code implementations 4 Dec 2023 Bingshuai Liu, Chenyang Lyu, Zijun Min, Zhanyu Wang, Jinsong Su, Longyue Wang

The advancement of Large Language Models (LLMs) has brought substantial attention to the Chain of Thought (CoT) approach, primarily due to its ability to enhance the capability of LLMs on complex reasoning tasks.

Question Answering Retrieval

GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation

no code implementations 25 Nov 2023 Zhanyu Wang, Longyue Wang, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou, Shuming Shi, Zhaopeng Tu

While the recent advances in Multimodal Large Language Models (MLLMs) constitute a significant leap forward in the field, these models are predominantly confined to the realm of input-side multimodal comprehension, lacking the capacity for multimodal content generation.

Instruction Following Language Modelling +7

A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering

no code implementations 13 Nov 2023 Yunxin Li, Longyue Wang, Baotian Hu, Xinyu Chen, Wanqi Zhong, Chenyang Lyu, Wei Wang, Min Zhang

The emergence of multimodal large models (MLMs) has significantly advanced the field of visual understanding, offering remarkable capabilities in the realm of visual question answering (VQA).

Decision Making General Knowledge +3

On the Cultural Gap in Text-to-Image Generation

no code implementations 6 Jul 2023 Bingshuai Liu, Longyue Wang, Chenyang Lyu, Yong Zhang, Jinsong Su, Shuming Shi, Zhaopeng Tu

Accordingly, we propose a novel multi-modal metric that considers object-text alignment to filter the fine-tuning data in the target culture, which is used to fine-tune a T2I model to improve cross-cultural generation.
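The filtering step described above can be sketched in a few lines: score each image-text pair with an alignment metric and keep only the best-aligned pairs for fine-tuning. This is a minimal illustration, not the paper's implementation — `alignment_score` here is a plain cosine similarity over toy embedding vectors standing in for the proposed object-text alignment metric, and the field names are hypothetical.

```python
import numpy as np

def alignment_score(img_emb, txt_emb):
    # Cosine similarity as a stand-in for an object-text alignment metric.
    return float(np.dot(img_emb, txt_emb) /
                 (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb)))

def filter_finetune_data(pairs, keep_ratio=0.5):
    """Keep only the best-aligned image-text pairs for fine-tuning."""
    scored = sorted(pairs, key=lambda p: alignment_score(p["img"], p["txt"]),
                    reverse=True)
    return scored[:max(1, int(len(scored) * keep_ratio))]

rng = np.random.default_rng(0)
pairs = [{"img": rng.normal(size=8), "txt": rng.normal(size=8)}
         for _ in range(10)]
kept = filter_finetune_data(pairs, keep_ratio=0.3)
print(len(kept))  # 3
```

In practice the embeddings would come from a pretrained vision-language encoder and the score would weight object-level alignment, but the select-top-fraction pattern is the same.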

Text-to-Image Generation

Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

1 code implementation 15 Jun 2023 Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu

Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied.

Language Modelling

Out-of-Distribution Generalization in Text Classification: Past, Present, and Future

no code implementations 23 May 2023 Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang

Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.

Out-of-Distribution Generalization text-classification +1

Is a Video worth $n\times n$ Images? A Highly Efficient Approach to Transformer-based Video Question Answering

no code implementations 16 May 2023 Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster

We show that by integrating our approach into VideoQA systems we can achieve comparable, even superior, performance with a significant speed-up in training and inference.
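The title's "a video worth $n\times n$ images" idea can be sketched as follows: uniformly sample $n^2$ frames from the video and tile them into a single $n\times n$ image grid, so a Transformer built for single images can process the whole clip in one pass. This is a toy illustration under the assumption that frames are NumPy arrays of shape `(T, H, W, C)`; the actual system's sampling and model details may differ.

```python
import numpy as np

def video_to_grid(frames, n):
    """Uniformly sample n*n frames and tile them into one n x n image grid."""
    frames = np.asarray(frames)           # (T, H, W, C)
    t = frames.shape[0]
    idx = np.linspace(0, t - 1, n * n).round().astype(int)
    picked = frames[idx]                  # (n*n, H, W, C)
    rows = [np.concatenate(picked[r * n:(r + 1) * n], axis=1)
            for r in range(n)]
    return np.concatenate(rows, axis=0)   # (n*H, n*W, C)

video = np.random.rand(32, 4, 4, 3)       # toy 32-frame video of 4x4 frames
grid = video_to_grid(video, n=2)
print(grid.shape)                         # (8, 8, 3)
```

The efficiency gain comes from running one image-sized forward pass instead of `T` per-frame passes.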

Question Answering Video Question Answering

Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering

no code implementations 14 May 2023 Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster

Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.)
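The dynamic retrospective-prospective stepping described above can be caricatured as a loop that moves through the video depending on which SRL role of the question is in focus. Everything here is hypothetical — the role-to-step mapping and scoring are illustrative stand-ins, not the paper's reasoning procedure.

```python
def next_frame(cur, num_frames, focused_role):
    # Illustrative policy: look back when re-checking the actor (agent),
    # stay put on the action (verb), look ahead for its outcome (patient).
    step = {"agent": -1, "verb": 0, "patient": +1}[focused_role]
    return min(max(cur + step, 0), num_frames - 1)

frame = 5
for role in ["patient", "patient", "agent", "verb"]:
    frame = next_frame(frame, num_frames=16, focused_role=role)
print(frame)  # 6
```

The real model would decide the direction of movement from learned attention over the question's SRL structure rather than a fixed table.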

Question Answering Semantic Role Labeling +1

A Paradigm Shift: The Future of Machine Translation Lies with Large Language Models

no code implementations 2 May 2023 Chenyang Lyu, Zefeng Du, Jitao Xu, Yitao Duan, Minghao Wu, Teresa Lynn, Alham Fikri Aji, Derek F. Wong, Siyou Liu, Longyue Wang

We conclude by emphasizing the critical role of LLMs in guiding the future evolution of MT and offer a roadmap for future exploration in the sector.

Document Translation Machine Translation +2

Document-Level Machine Translation with Large Language Models

1 code implementation 5 Apr 2023 Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, Zhaopeng Tu

Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.

Document Level Machine Translation Machine Translation +1

Dialogue-to-Video Retrieval

1 code implementation 23 Mar 2023 Chenyang Lyu, Manh-Duy Nguyen, Van-Tu Ninh, Liting Zhou, Cathal Gurrin, Jennifer Foster

Recent years have witnessed an increasing amount of dialogue/conversation on the web, especially on social media.

Recommendation Systems Retrieval +1

QAScore -- An Unsupervised Unreferenced Metric for the Question Generation Evaluation

no code implementations 9 Oct 2022 Tianbo Ji, Chenyang Lyu, Gareth Jones, Liting Zhou, Yvette Graham

Question Generation (QG) aims to automate the task of composing questions for a passage with a set of chosen answers found within the passage.

Language Modelling Question Generation +1

Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains

1 code implementation insights (ACL) 2022 Chenyang Lyu, Jennifer Foster, Yvette Graham

Past works that investigate out-of-domain performance of QA systems have mainly focused on general domains (e.g. news domain, Wikipedia domain), underestimating the importance of subdomains defined by the internal characteristics of QA datasets.

Position

Achieving Reliable Human Assessment of Open-Domain Dialogue Systems

1 code implementation ACL 2022 Tianbo Ji, Yvette Graham, Gareth J. F. Jones, Chenyang Lyu, Qun Liu

Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of human evaluation that is highly reliable while still remaining feasible and low cost.

Dialogue Evaluation

Improving Unsupervised Question Answering via Summarization-Informed Question Generation

no code implementations EMNLP 2021 Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang, Qun Liu

Template-based QG uses linguistically-informed heuristics to transform declarative sentences into interrogatives, whereas supervised QG uses existing Question Answering (QA) datasets to train a system to generate a question given a passage and an answer.

Dependency Parsing named-entity-recognition +8

Improving Document-Level Sentiment Analysis with User and Product Context

1 code implementation COLING 2020 Chenyang Lyu, Jennifer Foster, Yvette Graham

We achieve this by explicitly storing representations of reviews written by the same user and about the same product, forcing the model to memorize all reviews for one particular user and product.
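The memory described above can be sketched as a pair of lookup tables keyed by user and by product, whose stored review representations are averaged into extra context for classifying a new review. This is a minimal sketch, assuming reviews are already encoded as fixed-size vectors; the class and method names are hypothetical, not the paper's code.

```python
import numpy as np
from collections import defaultdict

class UserProductMemory:
    """Store review representations per user and per product, and return
    their averages as extra context for a new (user, product) review."""

    def __init__(self):
        self.by_user = defaultdict(list)
        self.by_product = defaultdict(list)

    def add(self, user, product, review_vec):
        self.by_user[user].append(review_vec)
        self.by_product[product].append(review_vec)

    def context(self, user, product, dim):
        u = (np.mean(self.by_user[user], axis=0)
             if self.by_user[user] else np.zeros(dim))
        p = (np.mean(self.by_product[product], axis=0)
             if self.by_product[product] else np.zeros(dim))
        return np.concatenate([u, p])     # (2*dim,)

mem = UserProductMemory()
mem.add("u1", "p1", np.ones(4))
mem.add("u1", "p2", np.zeros(4))
ctx = mem.context("u1", "p1", dim=4)
print(ctx.shape)  # (8,)
```

A sentiment classifier would then consume the review representation concatenated with this user/product context vector.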

Sentiment Analysis

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

1 code implementation 5 Oct 2020 Yuwei Li, Shouling Ji, Yuan Chen, Sizhuang Liang, Wei-Han Lee, Yueyao Chen, Chenyang Lyu, Chunming Wu, Raheem Beyah, Peng Cheng, Kangjie Lu, Ting Wang

We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

Cryptography and Security

SmartSeed: Smart Seed Generation for Efficient Fuzzing

no code implementations 7 Jul 2018 Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jian-hai Chen, Jing Chen

In total, our system discovers more than twice as many unique crashes and 5,040 extra unique paths compared with the existing best seed selection strategy across the 12 evaluated applications.

Cryptography and Security
