Search Results for author: Zexuan Zhong

Found 12 papers, 9 papers with code

Privacy Implications of Retrieval-Based Language Models

1 code implementation · 24 May 2023 · Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, Danqi Chen

Crucially, we find that $k$NN-LMs are more susceptible to leaking private information from their private datastore than parametric models.
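For context, a $k$NN-LM interpolates the parametric model's next-token distribution with a distribution induced by nearest neighbors retrieved from a datastore; the leakage risk arises because that datastore holds (private) training text verbatim. A minimal sketch of the interpolation, with toy vectors and a hypothetical datastore format (the function name, `lam`, and the `(key_vector, next_token)` layout are illustrative, not the paper's implementation):

```python
import math

def knn_lm_next_token(p_lm, query, datastore, k=2, lam=0.25, temp=1.0):
    """Interpolate a parametric LM distribution with a kNN distribution.

    p_lm: dict mapping token -> probability from the parametric model
    query: context vector for the current prefix
    datastore: list of (key_vector, next_token) pairs built from training text
    """
    # Retrieve the k nearest keys, scoring by negative squared L2 distance.
    scored = sorted(
        ((-sum((q - c) ** 2 for q, c in zip(query, key)), tok)
         for key, tok in datastore),
        reverse=True,
    )[:k]
    # A softmax over the negative distances gives the kNN distribution.
    z = sum(math.exp(s / temp) for s, _ in scored)
    p_knn = {}
    for s, tok in scored:
        p_knn[tok] = p_knn.get(tok, 0.0) + math.exp(s / temp) / z
    # Final distribution: lam * p_kNN + (1 - lam) * p_LM.
    tokens = set(p_lm) | set(p_knn)
    return {t: lam * p_knn.get(t, 0.0) + (1 - lam) * p_lm.get(t, 0.0)
            for t in tokens}
```

A datastore entry whose key sits close to the query pulls probability mass directly toward its stored next token, which is exactly the mechanism by which verbatim private text can surface at inference time.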


MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions

1 code implementation · 24 May 2023 · Zexuan Zhong, Zhengxuan Wu, Christopher D. Manning, Christopher Potts, Danqi Chen

The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option.

Language Modelling · Multi-hop Question Answering +1

Training Language Models with Memory Augmentation

1 code implementation · 25 May 2022 · Zexuan Zhong, Tao Lei, Danqi Chen

Recent work has improved language models (LMs) remarkably by equipping them with a non-parametric memory component.

Language Modelling · Machine Translation

Recovering Private Text in Federated Learning of Language Models

1 code implementation · 17 May 2022 · Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen

For the first time, we show the feasibility of recovering text from large batch sizes of up to 128 sentences.

Federated Learning · Word Embeddings

Structured Pruning Learns Compact and Accurate Models

1 code implementation · ACL 2022 · Mengzhou Xia, Zexuan Zhong, Danqi Chen

The growing size of neural language models has led to increased attention in model compression.

Model Compression

Simple Entity-Centric Questions Challenge Dense Retrievers

1 code implementation · EMNLP 2021 · Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, Danqi Chen

Open-domain question answering has exploded in popularity recently due to the success of dense retrieval models, which have surpassed sparse models using only a few supervised training examples.

Data Augmentation · Open-Domain Question Answering +2

Factual Probing Is [MASK]: Learning vs. Learning to Recall

2 code implementations · NAACL 2021 · Zexuan Zhong, Dan Friedman, Danqi Chen

Petroni et al. (2019) demonstrated that it is possible to retrieve world facts from a pre-trained language model by expressing them as cloze-style prompts, and interpreted the model's prediction accuracy as a lower bound on the amount of factual information it encodes.

Language Modelling

A Frustratingly Easy Approach for Entity and Relation Extraction

2 code implementations · NAACL 2021 · Zexuan Zhong, Danqi Chen

Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model.
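The excerpt describes a pipeline in which an entity model first predicts typed spans, and the relation model then reads an input augmented with those predictions. A minimal sketch of that input construction, assuming a simplified marker format (the `<S:…>`/`<O:…>` tags and the `mark_entities` helper are illustrative stand-ins, not the paper's exact markup):

```python
def mark_entities(tokens, subj_span, obj_span):
    """Insert typed entity markers around two predicted spans so a relation
    model can condition on the entity model's output.

    subj_span / obj_span: (start, end, type) with end exclusive;
    spans are assumed not to overlap.
    """
    (s1, e1, t1), (s2, e2, t2) = subj_span, obj_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s1:
            out.append(f"<S:{t1}>")   # open subject marker, typed
        if i == s2:
            out.append(f"<O:{t2}>")   # open object marker, typed
        out.append(tok)
        if i == e1 - 1:
            out.append(f"</S:{t1}>")  # close subject marker
        if i == e2 - 1:
            out.append(f"</O:{t2}>")  # close object marker
    return out
```

Because the relation model only sees the marked-up text, the two encoders stay independent: the entity model's predictions flow into the relation model purely through its input.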

Joint Entity and Relation Extraction · Multi-Task Learning +2

MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks

no code implementations · 31 Aug 2018 · Siwakorn Srisakaokul, Yuhao Zhang, Zexuan Zhong, Wei Yang, Tao Xie, Bo Li

In particular, given a target model, our framework includes multiple models (constructed from the target model) to form a model family.
