1 code implementation • EMNLP 2020 • Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, Luo Si
Peer reviews and rebuttals, with the rich interactions and argumentative discussions between them, are naturally a good resource for mining arguments.
Ranked #3 on Argument Pair Extraction (APE) on RR
1 code implementation • 1 Dec 2023 • Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages.
1 code implementation • 15 Nov 2023 • Guizhen Chen, Liying Cheng, Luu Anh Tuan, Lidong Bing
As large language models (LLMs) have demonstrated strong abilities in understanding context and generating natural language, it is worthwhile to evaluate the performance of LLMs on various computational argumentation tasks.
no code implementations • 17 Oct 2023 • Huiming Wang, Liying Cheng, Zhaodonghui Li, De Wen Soh, Lidong Bing
However, training a contrastive learning model requires large numbers of labeled sentences to explicitly construct positive and negative pairs, such as those in natural language inference (NLI) datasets.
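As context for how such NLI supervision is commonly used, here is a minimal sketch of turning NLI-labeled pairs into contrastive triplets (entailment as positive, contradiction as hard negative, as in supervised SimCSE-style training); the record layout is an assumption for illustration, not this paper's pipeline.

```python
# Minimal sketch: building contrastive triplets from NLI-style labels.
# Entailed hypotheses become positives; contradicting hypotheses become
# hard negatives. The dict keys below are assumed field names.

def build_contrastive_triplets(nli_records):
    positives = {}   # premise -> entailed hypothesis
    negatives = {}   # premise -> contradicting hypothesis (hard negative)
    for rec in nli_records:
        premise, hypothesis, label = rec["premise"], rec["hypothesis"], rec["label"]
        if label == "entailment":
            positives[premise] = hypothesis
        elif label == "contradiction":
            negatives[premise] = hypothesis
    # Keep only anchors that have both a positive and a hard negative.
    return [
        (anchor, positives[anchor], negatives[anchor])
        for anchor in positives
        if anchor in negatives
    ]

triplets = build_contrastive_triplets([
    {"premise": "A man is playing a guitar.",
     "hypothesis": "A person is playing an instrument.", "label": "entailment"},
    {"premise": "A man is playing a guitar.",
     "hypothesis": "The man is sleeping.", "label": "contradiction"},
])
print(triplets)
```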
1 code implementation • 31 May 2023 • Jia Guo, Liying Cheng, Wenxuan Zhang, Stanley Kok, Xin Li, Lidong Bing
In this work, we propose for the first time a challenging argument quadruplet extraction (AQE) task, which provides an all-in-one extraction of four argumentative components, i.e., claims, evidence, evidence types, and stances.
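To make the task output concrete, here is a minimal sketch of the four-part structure AQE extracts; the specific label values are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch of the four-part output of argument quadruplet extraction (AQE):
# (claim, evidence, evidence type, stance). Label values are assumed.
from dataclasses import dataclass

@dataclass
class ArgumentQuadruplet:
    claim: str
    evidence: str
    evidence_type: str  # e.g. "research", "expert", "case" (assumed labels)
    stance: str         # e.g. "support" or "contest" (assumed labels)

quad = ArgumentQuadruplet(
    claim="Remote work improves productivity.",
    evidence="A two-year study of 16,000 workers reported a 13% performance gain.",
    evidence_type="research",
    stance="support",
)
print(quad)
```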
1 code implementation • 24 May 2023 • Liying Cheng, Xingxuan Li, Lidong Bing
As large language models (LLMs) have demonstrated powerful capabilities across many domains and tasks, including context understanding, code generation, language generation, and data storytelling, many data analysts may worry that their jobs will be replaced by artificial intelligence (AI).
1 code implementation • 24 May 2023 • Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing
Our preliminary experiments show that generating intermediate reasoning steps does not always boost performance on complex temporal question-answering tasks.
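For readers unfamiliar with the setup, here is a sketch contrasting a direct prompt with an intermediate-reasoning (chain-of-thought-style) prompt for a temporal question; both prompt strings are illustrative assumptions, not the paper's actual prompts.

```python
# Sketch: direct prompting vs. prompting for intermediate reasoning steps
# on a temporal question. Wording is illustrative only.
question = "Which team did the player join after leaving Club A in 2014?"

direct_prompt = f"Question: {question}\nAnswer:"

cot_prompt = (
    f"Question: {question}\n"
    "Let's reason step by step about the timeline of events, "
    "then give the final answer.\nReasoning:"
)
# The finding above suggests the cot_prompt variant is not uniformly
# better than direct_prompt on complex temporal QA.
print(cot_prompt)
```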
1 code implementation • 22 May 2023 • Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing
With the recent advances in the reasoning abilities of large language models (LLMs) such as ChatGPT and GPT-4, there is a growing trend of applying LLMs to various tasks.
no code implementations • 19 May 2023 • Huiming Wang, Liying Cheng, Wenxuan Zhang, De Wen Soh, Lidong Bing
Recently, data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER).
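As one generic illustration of the kind of data augmentation referenced here, the sketch below performs label-preserving mention replacement for BIO-tagged NER data; it is a common DA baseline, not necessarily this paper's method, and the tag scheme and mention bank are assumed formats.

```python
# Minimal sketch of a label-preserving augmentation for few-shot NER:
# replace an entity mention with another mention of the same type.
import random

def mention_replacement(tokens, tags, mention_bank):
    """tokens/tags use a BIO scheme; mention_bank maps an entity type
    to alternative surface forms (both are assumed formats)."""
    out_tokens, out_tags, i = [], [], 0
    while i < len(tokens):
        if tags[i].startswith("B-"):
            etype = tags[i][2:]
            j = i + 1
            while j < len(tokens) and tags[j] == f"I-{etype}":
                j += 1
            new_mention = random.choice(mention_bank[etype]).split()
            out_tokens += new_mention
            out_tags += [f"B-{etype}"] + [f"I-{etype}"] * (len(new_mention) - 1)
            i = j
        else:
            out_tokens.append(tokens[i])
            out_tags.append(tags[i])
            i += 1
    return out_tokens, out_tags

tokens = ["Alice", "visited", "Paris", "."]
tags = ["B-PER", "O", "B-LOC", "O"]
bank = {"PER": ["Bob", "Mary Jane"], "LOC": ["Tokyo", "New York"]}
print(mention_replacement(tokens, tags, bank))
```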
1 code implementation • 15 May 2023 • Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing
Pre-trained language models (PLMs) have achieved outstanding results in abstractive single-document summarization (SDS).
1 code implementation • 26 Oct 2022 • Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si
A wide range of control perspectives have been explored in controllable text generation.
1 code implementation • ACL 2022 • Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si
Traditionally, preparing for a debate requires a manual process: reading numerous articles, selecting claims, identifying the stances of those claims, seeking evidence for them, and so on.
Claim-Evidence Pair Extraction (CEPE) • Claim Extraction with Stance Classification (CESC) • +1 more
1 code implementation • Findings (ACL) 2022 • Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si
A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.
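One common way to expose both signals to a generator is control-code prefixing, where the control signal is prepended to the input text; the sketch below shows this scheme with an assumed tag format and separator, as a generic illustration rather than this paper's model.

```python
# Sketch of control-code prefixing: the generator's input sequence carries
# both the control signal and the source text. Tag format is assumed.
def build_conditioned_input(source_text, control_signal):
    return f"<{control_signal}> {source_text}"

print(build_conditioned_input(
    "The phone has a 6.1-inch display and a two-day battery.",
    "sentiment:positive",
))
# -> "<sentiment:positive> The phone has a 6.1-inch display and ..."
```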
1 code implementation • ACL 2021 • Liying Cheng, Tianyu Wu, Lidong Bing, Luo Si
Prior work treats this task as a sequence labeling problem and a binary classification problem over two directly concatenated passages, which fails to fully utilize the unique characteristics and inherent relations of the two passages (this concatenation scheme is sketched below).
Ranked #2 on Argument Pair Extraction (APE) on RR
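For clarity, here is a minimal sketch of the prior formulation criticized above, in which the review and rebuttal are simply concatenated with a separator and fed to a single encoder; the token names follow BERT conventions, and the rest is an illustrative assumption.

```python
# Sketch of the concatenated-passage baseline: review and rebuttal are
# joined with a separator for one sentence-pair encoder pass.
def concat_passages(review_sentences, rebuttal_sentences):
    review = " ".join(review_sentences)
    rebuttal = " ".join(rebuttal_sentences)
    return f"[CLS] {review} [SEP] {rebuttal} [SEP]"

print(concat_passages(
    ["The novelty is limited."],
    ["We respectfully disagree; Section 3 introduces a new objective."],
))
```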
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task.
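Here is a minimal sketch of such a bottleneck adapter (Houlsby-style): a small residual MLP inserted into a frozen PrLM, with only the adapter weights trained; the dimensions are illustrative assumptions.

```python
# Minimal bottleneck-adapter sketch: a small residual MLP whose weights
# are the only trainable parameters; the PrLM itself stays frozen.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # Residual connection preserves the PrLM's representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Freeze the pretrained model and train only the adapters, e.g.:
# for p in prlm.parameters():      # prlm: any pretrained transformer
#     p.requires_grad = False
adapter = Adapter()
out = adapter(torch.randn(2, 10, 768))
print(out.shape)  # torch.Size([2, 10, 768])
```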
1 code implementation • EMNLP 2020 • Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si
Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs that convey the knowledge of some entities, and generate a natural language description (the triple-linearization input format is sketched below).
Ranked #1 on KG-to-Text Generation on ENT-DESC
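As background on the input format mentioned above, the sketch below linearizes RDF triples into a single sequence for a text generator, a common preprocessing step for knowledge-to-text models; the special tokens and example triples are illustrative assumptions.

```python
# Sketch of linearizing RDF triples into a generator input sequence.
# The <H>/<R>/<T> special tokens are assumed, not the paper's exact format.
def linearize_triples(triples):
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

triples = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "award", "Nobel Prize in Physics"),
]
print(linearize_triples(triples))
# -> "<H> Marie Curie <R> field <T> physics <H> Marie Curie <R> award ..."
```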