Search Results for author: Liying Cheng

Found 16 papers, 13 papers with code

SeaLLMs -- Large Language Models for Southeast Asia

1 code implementation • 1 Dec 2023 Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing

Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages.

Instruction Following

Exploring the Potential of Large Language Models in Computational Argumentation

1 code implementation • 15 Nov 2023 Guizhen Chen, Liying Cheng, Luu Anh Tuan, Lidong Bing

As large language models have demonstrated strong abilities in understanding context and generating natural language, it is worthwhile to evaluate the performance of LLMs on various computational argumentation tasks.

Argument Mining

Semantic-Aware Contrastive Sentence Representation Learning with Large Language Models

no code implementations • 17 Oct 2023 Huiming Wang, Liying Cheng, Zhaodonghui Li, De Wen Soh, Lidong Bing

However, to train a contrastive learning model, large numbers of labeled sentences are required to construct positive and negative pairs explicitly, such as those in natural language inference (NLI) datasets.
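As a toy illustration of the point above (all sentences and labels are invented for illustration), NLI annotations can be converted into contrastive pairs by treating entailed hypotheses as positives and contradicted hypotheses as negatives for their premises:

```python
# Hypothetical NLI-style examples: (premise, hypothesis, label).
nli = [
    ("A man plays guitar.", "A person makes music.", "entailment"),
    ("A man plays guitar.", "A man plays piano.", "contradiction"),
    ("A dog runs.", "An animal moves.", "entailment"),
    ("A dog runs.", "The dog is asleep.", "contradiction"),
]

def build_pairs(examples):
    """Split labeled sentence pairs into positive/negative training pairs."""
    positives, negatives = [], []
    for premise, hypothesis, label in examples:
        if label == "entailment":
            positives.append((premise, hypothesis))
        elif label == "contradiction":
            negatives.append((premise, hypothesis))
    return positives, negatives

pos, neg = build_pairs(nli)
```

This explicit-pair construction is exactly the labeling cost the snippet refers to: every positive and negative pair must come from human-annotated data.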

Contrastive Learning Natural Language Inference +2

AQE: Argument Quadruplet Extraction via a Quad-Tagging Augmented Generative Approach

1 code implementation • 31 May 2023 Jia Guo, Liying Cheng, Wenxuan Zhang, Stanley Kok, Xin Li, Lidong Bing

In this work, we propose, for the first time, a challenging argument quadruplet extraction (AQE) task, which provides an all-in-one extraction of four argumentative components, i.e., claims, evidence, evidence types, and stances.
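A minimal sketch of what one such quadruplet might look like as a data structure (field names and values are illustrative assumptions, not the dataset's actual schema):

```python
from dataclasses import dataclass

# Hypothetical container for one AQE output: one claim plus its
# supporting evidence, the evidence's type, and the claim's stance.
@dataclass
class ArgumentQuadruplet:
    claim: str
    evidence: str
    evidence_type: str  # e.g. "expert", "research", "case"
    stance: str         # e.g. "support" or "contest"

quad = ArgumentQuadruplet(
    claim="Social media harms attention spans.",
    evidence="A 2019 survey reported shorter focus among heavy users.",
    evidence_type="research",
    stance="support",
)
```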

Argument Mining Stance Classification +1

Is GPT-4 a Good Data Analyst?

1 code implementation • 24 May 2023 Liying Cheng, Xingxuan Li, Lidong Bing

As large language models (LLMs) have demonstrated powerful capabilities across many domains and tasks, including context understanding, code generation, language generation, and data storytelling, many data analysts may be concerned that their jobs will be replaced by artificial intelligence (AI).

Code Generation Text Generation

Unlocking Temporal Question Answering for Large Language Models Using Code Execution

1 code implementation • 24 May 2023 Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing

Our preliminary experiments show that generating intermediate reasoning steps does not always boost the performance of complex temporal question-answering tasks.

Logical Reasoning Math +1

Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization

1 code implementation • 22 May 2023 Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing

With the recent undeniable advancement in the reasoning abilities of large language models (LLMs) such as ChatGPT and GPT-4, there is a growing trend of using LLMs for a wide range of tasks.

Abstractive Text Summarization

Enhancing Few-shot NER with Prompt Ordering based Data Augmentation

no code implementations • 19 May 2023 Huiming Wang, Liying Cheng, Wenxuan Zhang, De Wen Soh, Lidong Bing

Recently, data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER).

Data Augmentation few-shot-ner +4

A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization

1 code implementation • 15 May 2023 Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing

Pre-trained language models (PLMs) have achieved outstanding results in abstractive single-document summarization (SDS).

Document Summarization Multi-Document Summarization

SentBS: Sentence-level Beam Search for Controllable Summarization

1 code implementation • 26 Oct 2022 Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si

A wide range of control perspectives have been explored in controllable text generation.

Sentence Text Generation

IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks

1 code implementation • ACL 2022 Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si

Traditionally, a debate usually requires a manual preparation process, including reading many articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc.

Claim-Evidence Pair Extraction (CEPE) Claim Extraction with Stance Classification (CESC) +1

MReD: A Meta-Review Dataset for Structure-Controllable Text Generation

1 code implementation • Findings (ACL) 2022 Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si

A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.

Text Generation Text Summarization

Argument Pair Extraction via Attention-guided Multi-Layer Multi-Cross Encoding

1 code implementation • ACL 2021 Liying Cheng, Tianyu Wu, Lidong Bing, Luo Si

Prior work treats this task as a sequence labeling problem and a binary classification problem over two passages concatenated directly together, which fails to fully exploit the unique characteristics and inherent relations of the two passages.

Argument Pair Extraction (APE) Binary Classification

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

no code implementations • ACL 2021 Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.
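As a rough numerical sketch of the adapter idea described above (dimensions and initialization are illustrative assumptions, not the paper's exact configuration), an adapter is a small bottleneck network with a residual connection inserted into a frozen PLM layer; with a zero-initialized up-projection it starts as an identity map, and only its few parameters are updated during downstream training:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 16, 4

# Stand-in for a frozen PLM layer's hidden states (batch of 2 tokens).
hidden = rng.standard_normal((2, d_model))

# Adapter parameters: the only weights updated on the downstream task.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # zero init -> near-identity start

def adapter(h):
    # Down-project to the bottleneck, apply ReLU, up-project, add residual.
    return h + np.maximum(h @ W_down, 0.0) @ W_up

out = adapter(hidden)
```

Because `W_up` starts at zero, the adapter initially passes hidden states through unchanged, so inserting it does not disturb the pretrained model before tuning.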

Language Modelling

ENT-DESC: Entity Description Generation by Exploring Knowledge Graph

1 code implementation • EMNLP 2020 Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si

Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description.
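As a toy illustration of such an input (the triples and the linearization format are invented for illustration, not this paper's actual pipeline), a few RDF-style (subject, predicate, object) triples about an entity can be linearized into a string for a seq2seq generator:

```python
# Illustrative RDF-style triples conveying knowledge about one entity.
triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "birthPlace", "London"),
    ("Alan Turing", "knownFor", "Turing machine"),
]

# Naive linearization, as commonly fed to a sequence-to-sequence model
# that then generates the natural language description.
linearized = " <sep> ".join(f"{s} | {p} | {o}" for s, p, o in triples)
```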

Graph-to-Sequence KG-to-Text Generation +2
