Search Results for author: Weiwen Xu

Found 14 papers, 10 papers with code

Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks

no code implementations • 2 Oct 2024 • Xingxuan Li, Weiwen Xu, Ruochen Zhao, Fangkai Jiao, Shafiq Joty, Lidong Bing

We validate CR-Planner on challenging domain-knowledge-intensive and reasoning-heavy tasks, including competitive programming, theorem-driven math reasoning, and complex domain retrieval problems.

SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages

1 code implementation • 29 Jul 2024 • Wenxuan Zhang, Hou Pong Chan, Yiran Zhao, Mahani Aljunied, Jianyu Wang, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu, Yew Ken Chia, Xin Li, Lidong Bing

Large Language Models (LLMs) have shown remarkable abilities across various tasks, yet their development has predominantly centered on high-resource languages like English and Chinese, leaving low-resource languages underserved.

Diversity • Instruction Following • +2

On the Transformations across Reward Model, Parameter Update, and In-Context Prompt

no code implementations • 24 Jun 2024 • Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, Shuaiyi Li, Bowen Cao, Zhisong Zhang, Xinting Huang, Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi

Despite the general capabilities of pre-trained large language models (LLMs), they still need further adaptation to better serve practical applications.

Reasons to Reject? Aligning Language Models with Judgments

1 code implementation • 22 Dec 2023 • Weiwen Xu, Deng Cai, Zhisong Zhang, Wai Lam, Shuming Shi

CUT (LLaMA2-chat-13b) can also align LLMs in an iterative fashion using up-to-date model-specific judgments, improving performance from 81.09 to 91.68 points on AlpacaEval.
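The snippet above describes aligning an LLM iteratively from textual judgments rather than scalar rewards. Below is a minimal sketch of that loop shape only; the function names (generate_responses, collect_judgments, finetune_on_judgments) and the toy data are hypothetical placeholders, not the paper's CUT implementation.

```python
# Hypothetical sketch of judgment-based iterative alignment (not the CUT code).
# Each round: sample responses, collect textual judgments on them, then fine-tune
# the model on (prompt, response, judgment) triples before the next round.
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    response: str
    judgment: str  # free-form critique, e.g. "too verbose; missing the unit"

def generate_responses(model, prompts):
    # Placeholder: a real run would sample from the current model.
    return [f"{model}: answer to '{p}'" for p in prompts]

def collect_judgments(prompts, responses):
    # Placeholder: judgments could come from humans or a judge model.
    return [Example(p, r, "judgment text for this response") for p, r in zip(prompts, responses)]

def finetune_on_judgments(model, examples):
    # Placeholder: a real implementation would update the model weights here.
    return f"{model}+round"

model = "base-llm"
prompts = ["What is 2 + 2?", "Summarize the abstract."]
for round_id in range(3):
    responses = generate_responses(model, prompts)
    examples = collect_judgments(prompts, responses)
    model = finetune_on_judgments(model, examples)
    print(f"round {round_id}: model is now {model}")
```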

mPMR: A Multilingual Pre-trained Machine Reader at Scale

1 code implementation • 23 May 2023 • Weiwen Xu, Xin Li, Wai Lam, Lidong Bing

mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages.

Classification • Machine Reading Comprehension • +3

From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader

1 code implementation • 9 Dec 2022 • Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing

We present Pre-trained Machine Reader (PMR), a novel method for retrofitting pre-trained masked language models (MLMs) to pre-trained machine reading comprehension (MRC) models without acquiring labeled data.

Classification • Extractive Question-Answering • +6
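A minimal sketch of the MRC-style span-extraction setup that PMR-style retrofitting targets: an MLM encoder with start/end heads that score answer spans in the context given a query. This is a generic illustration assuming a Hugging Face MLM checkpoint (roberta-base is an example choice), not the released PMR code, and the heads here are untrained, so the printed span is arbitrary until fine-tuning.

```python
# Generic illustration of an extractive-MRC head on top of a masked language model.
# Not the released PMR code; "roberta-base" is just an example MLM checkpoint.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class MlmSpanExtractor(nn.Module):
    def __init__(self, name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.start_head = nn.Linear(hidden, 1)  # scores each token as a span start
        self.end_head = nn.Linear(hidden, 1)    # scores each token as a span end

    def forward(self, **inputs):
        states = self.encoder(**inputs).last_hidden_state  # (batch, seq, hidden)
        return self.start_head(states).squeeze(-1), self.end_head(states).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = MlmSpanExtractor()
query = "Who wrote the paper?"
context = "The paper was written by Weiwen Xu and colleagues."
inputs = tokenizer(query, context, return_tensors="pt")
start_scores, end_scores = model(**inputs)
start, end = start_scores.argmax(-1).item(), end_scores.argmax(-1).item()
# With untrained heads the extracted span is meaningless; the point is the formulation.
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```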

PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks

1 code implementation • 17 Oct 2022 • Weiwen Xu, Xin Li, Yang Deng, Wai Lam, Lidong Bing

Specifically, a novel Peer Data Augmentation (PeerDA) approach is proposed which employs span pairs with the PR relation as the augmentation data for training.

Data Augmentation • Relation
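A minimal sketch of the span-pair augmentation idea described above: spans that share a category are treated as peers, and each peer pair becomes an extra training instance. The toy spans and instance format are hypothetical; the paper's repository has the actual PeerDA construction.

```python
# Hypothetical sketch of peer-relation data augmentation for span identification.
# Spans of the same category are "peers"; each peer pair yields an extra
# training instance that asks for one span given its peer.
from itertools import combinations
from collections import defaultdict

# Toy labeled spans: (span text, category). Not from the PeerDA datasets.
labeled_spans = [
    ("New York", "LOC"), ("Paris", "LOC"), ("Berlin", "LOC"),
    ("Google", "ORG"), ("DAMO Academy", "ORG"),
]

by_category = defaultdict(list)
for span, category in labeled_spans:
    by_category[category].append(span)

# Build augmentation instances from peer pairs within each category.
augmented = []
for category, spans in by_category.items():
    for a, b in combinations(spans, 2):
        augmented.append({"query": f"Find spans that are peers of '{a}'",
                          "answer": b, "category": category})

for instance in augmented:
    print(instance)
```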

Improving Lexical Embeddings for Robust Question Answering

no code implementations • 28 Feb 2022 • Weiwen Xu, Bowei Zou, Wai Lam, Ai Ti Aw

Recent techniques in Question Answering (QA) have achieved remarkable performance improvements, with some QA models even surpassing human performance.

Question Answering

Exploiting Reasoning Chains for Multi-hop Science Question Answering

1 code implementation • Findings (EMNLP) 2021 • Weiwen Xu, Yang Deng, Huihui Zhang, Deng Cai, Wai Lam

We propose a novel Chain Guided Retriever-reader (CGR) framework to model the reasoning chain for multi-hop Science Question Answering.

Abstract Meaning Representation • ARC • +1
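A minimal sketch of the retriever-reader loop shape suggested by the CGR description above: retrieve evidence hop by hop, grow candidate reasoning chains, then let a reader score answer options against the best chain. The toy corpus, the token-overlap scoring, and all function names are placeholders, not the paper's implementation.

```python
# Toy sketch of a chain-guided retriever-reader loop (not the CGR implementation).
# Hop 1 retrieves facts matching the question; hop 2 extends each fact into a
# chain; the "reader" scores answer options against the best chain.
import re

corpus = [
    "Friction between surfaces produces heat.",
    "Heat can raise the temperature of an object.",
    "Rubbing hands together is an example of friction.",
]
question = "Why do hands warm up when rubbed together?"
options = ["friction produces heat", "hands absorb sunlight"]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap(a, b):
    return len(tokens(a) & tokens(b))

def retrieve(query, k=2):
    return sorted(corpus, key=lambda fact: overlap(query, fact), reverse=True)[:k]

# Build two-hop chains: each retrieved fact seeds a follow-up retrieval.
chains = []
for first in retrieve(question):
    for second in retrieve(first):
        if second != first:
            chains.append((first, second))

best_chain = max(chains, key=lambda chain: overlap(question, " ".join(chain)))
answer = max(options, key=lambda option: overlap(option, " ".join(best_chain)))
print("chain:", best_chain)
print("answer:", answer)
```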

Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering

1 code implementation • Findings (ACL) 2021 • Weiwen Xu, Huihui Zhang, Deng Cai, Wai Lam

Our framework contains three new ideas: (a) AMR-SG, an AMR-based Semantic Graph, constructed from candidate fact AMRs to uncover any hop relations among the question, the answer, and multiple facts.

Abstract Meaning Representation • ARC • +6
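A minimal sketch of the graph-construction idea in (a) above: facts are linked into one graph through shared concepts, so multi-hop paths between question and answer concepts become visible. It substitutes hand-listed concept sets for real AMR parses and uses networkx for the graph; this is an assumption-laden illustration, not the AMR-SG code.

```python
# Toy sketch: build a semantic graph over facts via shared concepts (stand-ins
# for AMR nodes) and check hop connectivity. Not the AMR-SG implementation.
import networkx as nx

facts = {
    "f1": {"friction", "heat"},        # concepts extracted from fact 1
    "f2": {"heat", "temperature"},     # concepts extracted from fact 2
    "f3": {"rubbing", "friction"},     # concepts extracted from fact 3
}

graph = nx.Graph()
for fact_id, concepts in facts.items():
    for concept in concepts:
        graph.add_edge(fact_id, concept)  # bipartite: each fact links to its concepts

# A path from a question concept to an answer concept reveals which facts
# chain them together across multiple hops.
path = nx.shortest_path(graph, source="rubbing", target="temperature")
print(path)  # ['rubbing', 'f3', 'friction', 'f1', 'heat', 'f2', 'temperature']
```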

Addressing the Vulnerability of NMT in Input Perturbations

1 code implementation • NAACL 2021 • Weiwen Xu, Ai Ti Aw, Yang Ding, Kui Wu, Shafiq Joty

Neural Machine Translation (NMT) has achieved significant breakthroughs in performance but is known to be vulnerable to input perturbations.

fr-en • Machine Translation • +2
