Search Results for author: Mengzhou Xia

Found 31 papers, 21 papers with code

MMTEB: Massive Multilingual Text Embedding Benchmark

1 code implementation 19 Feb 2025 Kenneth Enevoldsen, Isaac Chung, Imene Kerboua, Márton Kardos, Ashwin Mathur, David Stap, Jay Gala, Wissam Siblini, Dominik Krzemiński, Genta Indra Winata, Saba Sturua, Saiteja Utpala, Mathieu Ciancone, Marion Schaeffer, Gabriel Sequeira, Diganta Misra, Shreeya Dhakal, Jonathan Rystrøm, Roman Solomatin, Ömer Çağatan, Akash Kundu, Martin Bernstorff, Shitao Xiao, Akshita Sukhlecha, Bhavish Pahwa, Rafał Poświata, Kranthi Kiran GV, Shawon Ashraf, Daniel Auras, Björn Plüster, Jan Philipp Harries, Loïc Magne, Isabelle Mohr, Mariya Hendriksen, Dawei Zhu, Hippolyte Gisserot-Boukhlef, Tom Aarsen, Jan Kostkan, Konrad Wojtasik, Taemin Lee, Marek Šuppa, Crystina Zhang, Roberta Rocca, Mohammed Hamdy, Andrianos Michail, John Yang, Manuel Faysse, Aleksei Vatolin, Nandan Thakur, Manan Dey, Dipam Vasani, Pranjal Chitale, Simone Tedeschi, Nguyen Tai, Artem Snegirev, Michael Günther, Mengzhou Xia, Weijia Shi, Xing Han Lù, Jordan Clive, Gayatri Krishnakumar, Anna Maksimova, Silvan Wehrli, Maria Tikhonova, Henil Panchal, Aleksandr Abramov, Malte Ostendorff, Zheng Liu, Simon Clematide, Lester James Miranda, Alena Fenogenova, Guangyu Song, Ruqiya Bin Safi, Wen-Ding Li, Alessia Borghini, Federico Cassano, Hongjin Su, Jimmy Lin, Howard Yen, Lasse Hansen, Sara Hooker, Chenghao Xiao, Vaibhav Adlakha, Orion Weller, Siva Reddy, Niklas Muennighoff

MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date.

Instruction Following Retrieval
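
For orientation, a minimal sketch of evaluating an embedding model with the open-source `mteb` package that hosts the benchmark; the task name is illustrative, and MMTEB-specific benchmark identifiers may differ by library version.

```python
# Minimal sketch (assumed API; check your mteb version): run a
# sentence-embedding model on one benchmark task and save JSON results.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])  # illustrative task name
results = evaluation.run(model, output_folder="results")
```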

Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial?

no code implementations 2 Feb 2025 Wenzhe Li, Yong Lin, Mengzhou Xia, Chi Jin

We find that MoA performance is highly sensitive to the quality of the models being mixed, and that mixing different LLMs often lowers the average quality of the mixture.

Math MMLU
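
As a sketch of the general Mixture-of-Agents recipe examined above (not the paper's exact setup): several proposer models answer independently, and an aggregator model synthesizes their outputs. The `generate` callable is a hypothetical stand-in for any chat-completion API.

```python
from typing import Callable

def moa_round(
    question: str,
    proposers: list[str],
    aggregator: str,
    generate: Callable[[str, str], str],  # hypothetical: (model, prompt) -> text
) -> str:
    # Each proposer answers the question independently.
    proposals = [generate(m, question) for m in proposers]
    # The aggregator sees all candidate responses and writes a final answer.
    context = "\n\n".join(f"Response {i + 1}:\n{p}" for i, p in enumerate(proposals))
    prompt = (
        f"Question: {question}\n\n"
        f"Candidate responses:\n{context}\n\n"
        "Synthesize a single, high-quality answer."
    )
    return generate(aggregator, prompt)
```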

ICONS: Influence Consensus for Vision-Language Data Selection

no code implementations 31 Dec 2024 Xindi Wu, Mengzhou Xia, Rulin Shao, Zhiwei Deng, Pang Wei Koh, Olga Russakovsky

In this work, we introduce ICONS, a gradient-driven Influence CONsensus approach for vision-language data Selection that selects a compact training dataset for efficient multi-task training.

Language Modeling
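
A minimal sketch of the consensus step, assuming a matrix of per-task influence scores has already been computed (the gradient-based scoring itself is omitted): each task votes for its most influential examples, and the examples with the broadest agreement are kept.

```python
import numpy as np

def consensus_select(influence: np.ndarray, top_k: int, budget: int) -> np.ndarray:
    """influence: (num_tasks, num_examples) per-task influence scores."""
    n_tasks, n_examples = influence.shape
    votes = np.zeros(n_examples, dtype=int)
    for t in range(n_tasks):
        votes[np.argsort(influence[t])[-top_k:]] += 1  # task t's top-k picks
    return np.argsort(votes)[-budget:]  # examples most tasks agree on
```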

BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval

no code implementations 16 Jul 2024 Hongjin Su, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff, Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O. Arik, Danqi Chen, Tao Yu

To better benchmark retrieval on such challenging queries, we introduce BRIGHT, the first text retrieval benchmark that requires intensive reasoning to retrieve relevant documents.

Question Answering Text Retrieval

LitSearch: A Retrieval Benchmark for Scientific Literature Search

1 code implementation 10 Jul 2024 Anirudh Ajith, Mengzhou Xia, Alexis Chevalier, Tanya Goyal, Danqi Chen, Tianyu Gao

LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and (2) questions manually written by authors about their recently published papers.

Retrieval

CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs

1 code implementation 26 Jun 2024 ZiRui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, Danqi Chen

All models lag far behind human performance of 80.5%, underscoring weaknesses in the chart understanding capabilities of existing MLLMs.

Chart Understanding

SimPO: Simple Preference Optimization with a Reference-Free Reward

2 code implementations 23 May 2024 Yu Meng, Mengzhou Xia, Danqi Chen

Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability.

Chatbot Instruction Following
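
For reference, a minimal sketch of the SimPO objective itself: the implicit reward is the length-normalized log-likelihood of a response (no reference model), and the loss enforces a target margin gamma between chosen and rejected responses.

```python
import torch
import torch.nn.functional as F

def simpo_loss(
    chosen_logps: torch.Tensor,    # summed token log-probs of chosen responses
    rejected_logps: torch.Tensor,  # summed token log-probs of rejected responses
    chosen_len: torch.Tensor,
    rejected_len: torch.Tensor,
    beta: float = 2.0,
    gamma: float = 1.0,
) -> torch.Tensor:
    chosen_reward = beta * chosen_logps / chosen_len       # length-normalized
    rejected_reward = beta * rejected_logps / rejected_len
    return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()
```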

Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training

no code implementations 6 May 2024 Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis

Mixture-of-experts (MoE) models facilitate efficient scaling; however, training the router network introduces the challenge of optimizing a non-differentiable, discrete objective.

Language Modeling
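
A minimal sketch of the soft expert-merging idea that keeps the mixture differentiable: expert parameters are merged under the router's softmax weights rather than discretely selected. Lory routes per segment; for brevity, this sketch routes once per sequence.

```python
import torch
import torch.nn as nn

class SoftMergedFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.w_in = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * 0.02)
        self.w_out = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        # One soft routing decision per sequence; everything stays differentiable.
        probs = self.router(x.mean(dim=1)).softmax(dim=-1)    # (batch, n_experts)
        w_in = torch.einsum("be,edf->bdf", probs, self.w_in)  # merged weights
        w_out = torch.einsum("be,efd->bfd", probs, self.w_out)
        h = torch.relu(torch.einsum("bsd,bdf->bsf", x, w_in))
        return torch.einsum("bsf,bfd->bsd", h, w_out)
```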

What is in Your Safe Data? Identifying Benign Data that Breaks Safety

1 code implementation 1 Apr 2024 Luxi He, Mengzhou Xia, Peter Henderson

Training on just 100 of these seemingly benign datapoints surprisingly leads to the fine-tuned model affirmatively responding to >70% of tested harmful requests, compared to <20% after fine-tuning on randomly selected data.

Math

Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications

no code implementations 7 Feb 2024 Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, Peter Henderson

We develop methods to identify critical regions that are vital for safety guardrails, and that are disentangled from utility-relevant regions at both the neuron and rank levels.

Safety Alignment
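
A minimal sketch of the disentangling step at the weight level, assuming SNIP-style importance scores (|weight × gradient|) computed separately on safety data and utility data; the paper's exact scoring and thresholds differ.

```python
import torch

def safety_critical_mask(
    safety_imp: torch.Tensor,   # importance scores on safety data
    utility_imp: torch.Tensor,  # importance scores on utility data
    top_fraction: float = 0.01,
) -> torch.Tensor:
    k = max(1, int(safety_imp.numel() * top_fraction))
    top_safety = torch.zeros(safety_imp.numel(), dtype=torch.bool)
    top_safety[safety_imp.view(-1).topk(k).indices] = True
    top_utility = torch.zeros(utility_imp.numel(), dtype=torch.bool)
    top_utility[utility_imp.view(-1).topk(k).indices] = True
    # Weights that matter for safety but not for utility.
    return (top_safety & ~top_utility).view_as(safety_imp)
```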

LESS: Selecting Influential Data for Targeted Instruction Tuning

3 code implementations 6 Feb 2024 Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, Danqi Chen

Instruction tuning has unlocked powerful capabilities in large language models (LLMs), effectively using combined datasets to develop general-purpose chatbots.
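
A minimal sketch of the selection step in the spirit of LESS, assuming gradient features for candidate training examples and target-task examples have already been computed (in the paper these are LoRA-based, randomly projected gradients):

```python
import torch

def select_influential(
    train_grads: torch.Tensor,   # (n_train, d) gradient features
    target_grads: torch.Tensor,  # (n_target, d) gradient features
    budget: int,
) -> torch.Tensor:
    train = torch.nn.functional.normalize(train_grads, dim=-1)
    target = torch.nn.functional.normalize(target_grads, dim=-1)
    # Score each candidate by its best cosine match to a target example.
    scores = (train @ target.T).max(dim=1).values
    return scores.topk(budget).indices
```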

Detecting Pretraining Data from Large Language Models

1 code implementation 25 Oct 2023 Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer

Min-K% Prob can be applied without any knowledge about the pretraining corpus or any additional training, departing from previous detection methods that require training a reference model on data that is similar to the pretraining data.

Machine Unlearning
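
The detection score itself is simple enough to sketch: average the lowest k% of a candidate text's token log-probabilities under the model, and treat high scores as evidence the text appeared in pretraining.

```python
import torch

def min_k_prob(token_logprobs: torch.Tensor, k: float = 0.2) -> float:
    """token_logprobs: 1-D tensor of per-token log-probabilities."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = torch.topk(token_logprobs, n, largest=False).values
    return lowest.mean().item()  # higher => more likely pretraining data
```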

Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning

2 code implementations 10 Oct 2023 Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen

In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models.

Language Modeling +2

Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation

2 code implementations 10 Oct 2023 Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen

Finally, we propose an effective alignment method that explores diverse generation strategies, which can reasonably reduce the misalignment rate under our attack.

Red Teaming
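
A minimal sketch of the generation-exploitation idea: decode the same prompt under a grid of sampling configurations rather than the single default configuration. The model name and grid values here are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("...", return_tensors="pt")  # prompt under evaluation
outputs = []
for temperature in (0.7, 1.0, 1.5):
    for top_p in (0.7, 0.9, 1.0):
        ids = model.generate(
            **inputs,
            do_sample=True,
            temperature=temperature,
            top_p=top_p,
            max_new_tokens=64,
        )
        outputs.append(tokenizer.decode(ids[0], skip_special_tokens=True))
```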

Trainable Transformer in Transformer

1 code implementation 3 Jul 2023 Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, Sanjeev Arora

In this work, we propose an efficient construction, Transformer in Transformer (in short, TinT), that allows a transformer to simulate and fine-tune complex models internally during inference (e.g., pre-trained language models).

Attribute In-Context Learning +2

InstructEval: Systematic Evaluation of Instruction Selection Methods

no code implementations 1 Jul 2023 Anirudh Ajith, Chris Pan, Mengzhou Xia, Ameet Deshpande, Karthik Narasimhan

In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations.

Benchmarking In-Context Learning +2
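
For concreteness, a minimal sketch of assembling such an ICL prompt from an instruction and a handful of demonstrations; the field labels are generic placeholders.

```python
def build_icl_prompt(
    instruction: str,
    demos: list[tuple[str, str]],  # (input, output) demonstration pairs
    test_input: str,
) -> str:
    parts = [instruction]
    parts += [f"Input: {x}\nOutput: {y}" for x, y in demos]
    parts.append(f"Input: {test_input}\nOutput:")  # model completes the output
    return "\n\n".join(parts)
```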

Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models

no code implementations 26 Oct 2022 Mozes van de Kar, Mengzhou Xia, Danqi Chen, Mikel Artetxe

Our results suggest that the success of prompting can partly be explained by the model being exposed to similar examples during pretraining, which can be directly retrieved through regular expressions.

Text Classification Text Infilling +2
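
A minimal sketch of the mining idea for sentiment classification: regular expressions pull naturally occurring pseudo-labeled examples straight from a raw corpus. The patterns shown are illustrative, not the paper's.

```python
import re

PATTERNS = {  # illustrative mining patterns
    "positive": re.compile(r"(.{20,200})\. It was (?:great|excellent)\."),
    "negative": re.compile(r"(.{20,200})\. It was (?:terrible|awful)\."),
}

def mine_examples(corpus: list[str]) -> list[tuple[str, str]]:
    mined = []
    for doc in corpus:
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(doc):
                mined.append((match.group(1), label))  # (text, pseudo-label)
    return mined
```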

MABEL: Attenuating Gender Bias using Textual Entailment Data

2 code implementations 26 Oct 2022 Jacqueline He, Mengzhou Xia, Christiane Fellbaum, Danqi Chen

To this end, we propose MABEL (a Method for Attenuating Gender Bias using Entailment Labels), an intermediate pre-training approach for mitigating gender bias in contextualized representations.

Contrastive Learning Fairness +1

Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

1 code implementation 30 May 2022 Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, Ves Stoyanov

In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks.

Few-Shot Learning Text Infilling
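
A minimal sketch of the discriminative prompting idea: place each candidate label word into a template and use ELECTRA's replaced-token-detection head to score how "original" it looks; the template and verbalizers below are illustrative.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

def score_label(text: str, label_word: str) -> float:
    enc = tokenizer(f"{text} It was {label_word}.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]  # >0 means "replaced", per token
    word_ids = tokenizer(label_word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return -logits[i:i + len(word_ids)].mean().item()  # originality
    raise ValueError("label word not found in prompt")

best = max(["great", "terrible"], key=lambda w: score_label("The movie was fun.", w))
```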

Structured Pruning Learns Compact and Accurate Models

2 code implementations ACL 2022 Mengzhou Xia, Zexuan Zhong, Danqi Chen

The growing size of neural language models has led to increased attention in model compression.

Model Compression

Non-Parametric Few-Shot Learning for Word Sense Disambiguation

1 code implementation NAACL 2021 Howard Chen, Mengzhou Xia, Danqi Chen

One significant challenge in supervised all-words WSD is to classify among senses for a majority of words that lie in the long-tail distribution.

Few-Shot Learning Word Sense Disambiguation
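
A minimal sketch of the non-parametric flavor of this setup: compare the target word's contextual embedding against candidate sense embeddings and predict the nearest sense; how the embeddings are produced is left abstract.

```python
import torch

def predict_sense(
    context_emb: torch.Tensor,           # (d,) target word in context
    sense_embs: dict[str, torch.Tensor]  # sense id -> (d,) sense embedding
) -> str:
    sims = {
        sense: torch.cosine_similarity(context_emb, emb, dim=0).item()
        for sense, emb in sense_embs.items()
    }
    return max(sims, key=sims.get)  # nearest sense wins
```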

MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning

2 code implementations NAACL 2021 Mengzhou Xia, Guoqing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, Ahmed Hassan Awadallah

Extensive experiments on real-world low-resource languages - without access to large-scale monolingual corpora or large amounts of labeled data - for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach.

Cross-Lingual Transfer Meta-Learning +5

Demoting Racial Bias in Hate Speech Detection

no code implementations WS 2020 Mengzhou Xia, Anjalie Field, Yulia Tsvetkov

In current hate speech datasets, there exists a high correlation between annotators' perceptions of toxicity and signals of African American English (AAE).

Hate Speech Detection

Predicting Performance for Natural Language Processing Tasks

1 code implementation ACL 2020 Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig

Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting.
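
A minimal sketch of the underlying recipe, framed as regression over setting features with placeholder data; the paper's actual feature set and regressor differ in detail.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_past = rng.random((200, 8))  # features of past settings (data size, etc.)
y_past = rng.random(200)       # observed scores (e.g., BLEU, accuracy)

model = GradientBoostingRegressor().fit(X_past, y_past)
predicted = model.predict(rng.random((5, 8)))  # scores for unseen settings
```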

Generalized Data Augmentation for Low-Resource Translation

no code implementations ACL 2019 Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig

Translation to or from low-resource languages (LRLs) poses challenges for machine translation in terms of both adequacy and fluency.

Data Augmentation Translation +1

Choosing Transfer Languages for Cross-Lingual Learning

1 code implementation ACL 2019 Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, Graham Neubig

Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages.

Cross-Lingual Transfer
