Search Results for author: Chengwei Qin

Found 19 papers, 8 papers with code

Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?

no code implementations • 19 Apr 2024 • Chengwei Qin, Wenhan Xia, Tan Wang, Fangkai Jiao, Yuchen Hu, Bosheng Ding, Ruirui Chen, Shafiq Joty

One key finding in psychology is that, compared with recalling irrelevant past experiences, recalling relevant ones can help humans better handle new tasks.

GSM8K

Lifelong Event Detection with Embedding Space Separation and Compaction

no code implementations • 3 Apr 2024 • Chengwei Qin, Ruirui Chen, Ruochen Zhao, Wenhan Xia, Shafiq Joty

However, the simple combination of memory data and new-task samples can still result in substantial forgetting of previously acquired knowledge, which may occur due to the potential overlap between the feature distribution of new data and the previously learned embedding space.

Event Detection • Transfer Learning

How Much are LLMs Contaminated? A Comprehensive Survey and the LLMSanitize Library

1 code implementation • 31 Mar 2024 • Mathieu Ravaut, Bosheng Ding, Fangkai Jiao, Hailin Chen, Xingxuan Li, Ruochen Zhao, Chengwei Qin, Caiming Xiong, Shafiq Joty

The rise of Large Language Models (LLMs) in recent years has brought new opportunities but also new challenges, and contamination is quickly becoming a critical concern.

Question Answering

Data Augmentation using LLMs: Data Perspectives, Learning Paradigms and Challenges

no code implementations • 5 Mar 2024 • Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, Shafiq Joty

In the rapidly evolving field of machine learning (ML), data augmentation (DA) has emerged as a pivotal technique for enhancing model performance by diversifying training examples without the need for additional data collection.

Data Augmentation

Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing

2 code implementations • 1 Feb 2024 • Fangkai Jiao, Chengwei Qin, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty

Large Language Models (LLMs) have demonstrated significant potential in handling complex reasoning tasks through step-by-step rationale generation.

Hallucination • Logical Reasoning

Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning

no code implementations • 8 Jan 2024 • Wenhan Xia, Chengwei Qin, Elad Hazan

Fine-tuning is the primary methodology for tailoring pre-trained large language models to specific tasks.

Benchmarking • CoLA • +1

Improving In-context Learning via Bidirectional Alignment

no code implementations • 28 Dec 2023 • Chengwei Qin, Wenhan Xia, Fangkai Jiao, Shafiq Joty

Large language models (LLMs) have shown impressive few-shot generalization on many tasks via in-context learning (ICL).

In-Context Learning

Lifelong Sequence Generation with Dynamic Module Expansion and Adaptation

no code implementations • 15 Oct 2023 • Chengwei Qin, Chen Chen, Shafiq Joty

Inspired by the learning paradigm of humans, we propose Dynamic Module Expansion and Adaptation (DMEA), which enables the model to dynamically determine the architecture for acquiring new knowledge based on task correlation and select the most similar previous tasks to facilitate adaptation to new tasks.

Continual Learning Transfer Learning

In-Context Learning with Iterative Demonstration Selection

no code implementations • 15 Oct 2023 • Chengwei Qin, Aston Zhang, Anirudh Dagar, Wenming Ye

The output reasoning path is then used to choose demonstrations that are prepended to the test sample for inference.

Few-Shot Learning • In-Context Learning • +3
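The excerpt above describes demonstration selection guided by a reasoning path generated for the test sample. A minimal sketch of that general idea (not the authors' implementation); `llm_generate` and `embed` are hypothetical helpers standing in for a zero-shot chain-of-thought call and a sentence-embedding model:

```python
import numpy as np

def build_icl_prompt(test_question, demo_pool, llm_generate, embed, k=4):
    """Rank candidate demonstrations by similarity to the test question's
    zero-shot reasoning path, then prepend the top-k to the test sample."""
    # 1. Elicit a zero-shot reasoning path for the test question.
    reasoning_path = llm_generate(f"{test_question}\nLet's think step by step.")
    query = embed(reasoning_path)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # 2. Score each candidate demonstration against the reasoning path.
    ranked = sorted(
        demo_pool,  # each demo: {"question": ..., "rationale": ..., "answer": ...}
        key=lambda d: cosine(query, embed(d["question"] + " " + d["rationale"])),
        reverse=True,
    )

    # 3. Prepend the selected demonstrations to the test sample for inference.
    blocks = [f"Q: {d['question']}\nA: {d['rationale']} {d['answer']}" for d in ranked[:k]]
    return "\n\n".join(blocks + [f"Q: {test_question}\nA:"])
```

As the paper's title indicates, the actual method iterates this selection; the sketch shows only a single round.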

PromptSum: Parameter-Efficient Controllable Abstractive Summarization

no code implementations • 6 Aug 2023 • Mathieu Ravaut, Hailin Chen, Ruochen Zhao, Chengwei Qin, Shafiq Joty, Nancy Chen

Prompt tuning (PT), a parameter-efficient technique that only tunes the additional prompt embeddings while keeping the backbone pre-trained language model (PLM) frozen, has shown promising results in language understanding tasks, especially in low-resource scenarios.

Abstractive Text Summarization • Language Modelling
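The excerpt above describes the general prompt-tuning recipe, trainable soft prompt embeddings prepended to the input while every backbone parameter stays frozen, rather than PromptSum's specific design. A minimal sketch of that recipe, assuming a HuggingFace-style backbone that exposes `config.hidden_size`, `get_input_embeddings()` and an `inputs_embeds` argument; class and variable names are illustrative:

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prompt tuning in a nutshell: learn a small matrix of prompt vectors
    and keep the pre-trained backbone completely frozen."""

    def __init__(self, backbone, prompt_length=20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone PLM stays frozen
        hidden = self.backbone.config.hidden_size
        # The only trainable parameters: `prompt_length` soft prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.backbone.get_input_embeddings()(input_ids)
        batch = tok_emb.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
```

Only `soft_prompt` would be handed to the optimizer, which is what makes the approach parameter-efficient and attractive in low-resource settings.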

Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition

1 code implementation • 18 Jun 2023 • Yuchen Hu, Ruizhe Li, Chen Chen, Chengwei Qin, Qiushi Zhu, Eng Siong Chng

In this work, we investigate the noise-invariant visual modality to strengthen the robustness of AVSR, which can adapt to any testing noise without depending on noisy training data, a.k.a. unsupervised noise adaptation.

Audio-Visual Speech Recognition • speech-recognition • +1

Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework

1 code implementation • 5 May 2023 • Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing

As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of their most serious shortcomings is the lack of factual correctness.

Open-Domain Question Answering

Retrieving Multimodal Information for Augmented Generation: A Survey

no code implementations • 20 Mar 2023 • Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty

As Large Language Models (LLMs) have become popular, an important trend has emerged of using multimodality to augment their generation ability, enabling LLMs to better interact with the world.

Retrieval

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?

1 code implementation • 8 Feb 2023 • Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang

Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.

Arithmetic Reasoning • Zero-Shot Learning

Is GPT-3 a Good Data Annotator?

no code implementations • 20 Dec 2022 • Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, Lidong Bing

In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.

Language Modelling

Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation

1 code implementation • ACL 2022 • Chengwei Qin, Shafiq Joty

Existing continual relation learning (CRL) methods rely on plenty of labeled training data for learning a new task, which can be hard to acquire in real scenarios, as getting large and representative labeled data is often expensive and time-consuming.

Data Augmentation • Relation

LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5

1 code implementation • ICLR 2022 • Chengwei Qin, Shafiq Joty

Existing approaches to lifelong language learning rely on plenty of labeled data for learning a new task, which is hard to obtain in most real scenarios.

Few-Shot Learning
