Search Results for author: Xiaochuang Han

Found 22 papers, 12 papers with code

When One LLM Drools, Multi-LLM Collaboration Rules

no code implementations • 6 Feb 2025 • Shangbin Feng, Wenxuan Ding, Alisa Liu, Zifeng Wang, Weijia Shi, Yike Wang, Zejiang Shen, Xiaochuang Han, Hunter Lang, Chen-Yu Lee, Tomas Pfister, Yejin Choi, Yulia Tsvetkov

This position paper argues that in many realistic (i.e., complex, contextualized, subjective) scenarios, one LLM is not enough to produce a reliable output.

Diversity

JPEG-LM: LLMs as Image Generators with Canonical Codec Representations

no code implementations • 15 Aug 2024 • Xiaochuang Han, Marjan Ghazvininejad, Pang Wei Koh, Yulia Tsvetkov

Evaluation of image generation shows that this simple and straightforward approach is more effective than pixel-based modeling and sophisticated vector-quantization baselines, over which our method yields a 31% reduction in FID.

Image Generation, Quantization +2
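
JPEG-LM's key move is to treat an image's canonical JPEG byte stream as an ordinary token sequence for a language model. Below is a minimal sketch of that representation, assuming a byte-level vocabulary; the quality setting and tokenization details are illustrative rather than the paper's exact configuration.

```python
# Sketch: serialize an image with a standard JPEG codec and treat the raw
# bytes as the LM's token sequence; decoding the bytes recovers the image.
import io
from PIL import Image

def image_to_jpeg_tokens(image: Image.Image, quality: int = 25) -> list[int]:
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)  # canonical codec encode
    return list(buf.getvalue())                      # one token per byte (0-255)

def jpeg_tokens_to_image(tokens: list[int]) -> Image.Image:
    return Image.open(io.BytesIO(bytes(tokens)))     # codec decode = "detokenize"

img = Image.new("RGB", (64, 64), color=(120, 30, 200))
tokens = image_to_jpeg_tokens(img)   # the sequence an LM could be trained on
restored = jpeg_tokens_to_image(tokens)
```

Generation then amounts to sampling bytes autoregressively and running them through the stock JPEG decoder.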

Can LLM Graph Reasoning Generalize beyond Pattern Memorization?

1 code implementation • 23 Jun 2024 • Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, Yulia Tsvetkov

To this end, we propose the NLGift benchmark, an evaluation suite for LLM graph reasoning generalization: whether LLMs can go beyond the semantic, numeric, structural, and reasoning patterns in the synthetic training data and improve utility on real-world graph-based tasks.

Memorization

Tuning Language Models by Proxy

2 code implementations • 16 Jan 2024 • Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith

Despite the general capabilities of large pretrained language models, they consistently benefit from further adaptation to better achieve desired behaviors.

Domain Adaptation, Math +2
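
Proxy-tuning steers a large base model at decoding time using only the output logits of a small tuned "expert" and its untuned "anti-expert"; the base model's weights are never touched. A minimal sketch of that logit arithmetic, with random tensors standing in for the three models' next-token logits:

```python
# Sketch of proxy-tuning's decoding-time logit arithmetic:
# base + (expert - anti-expert), applied at every generation step.
import torch

def proxy_tuned_logits(base: torch.Tensor,
                       expert: torch.Tensor,
                       antiexpert: torch.Tensor) -> torch.Tensor:
    # The small models' tuned-vs-untuned difference acts as a steering signal.
    return base + (expert - antiexpert)

vocab = 8  # toy vocabulary; the real models must share one tokenizer/vocab
base = torch.randn(vocab)        # large pretrained model's next-token logits
expert = torch.randn(vocab)      # small model after fine-tuning
antiexpert = torch.randn(vocab)  # the same small model before fine-tuning

probs = torch.softmax(proxy_tuned_logits(base, expert, antiexpert), dim=-1)
next_token = torch.argmax(probs).item()
```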

P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models

1 code implementation • 16 Nov 2023 • YuHan Liu, Shangbin Feng, Xiaochuang Han, Vidhisha Balachandran, Chan Young Park, Sachin Kumar, Yulia Tsvetkov

In this work, we take a first step towards designing summarization systems that are faithful to the author's intent, not only the semantic content of the article.

News Summarization

On the Zero-Shot Generalization of Machine-Generated Text Detectors

no code implementations • 8 Oct 2023 • Xiao Pu, Jingyu Zhang, Xiaochuang Han, Yulia Tsvetkov, Tianxing He

The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text.

Zero-shot Generalization

Understanding In-Context Learning via Supportive Pretraining Data

no code implementations • 26 Jun 2023 • Xiaochuang Han, Daniel Simig, Todor Mihaylov, Yulia Tsvetkov, Asli Celikyilmaz, Tianlu Wang

We observe that continued pretraining on this small subset significantly improves the model's ICL ability, by up to 18%.

In-Context Learning

Trusting Your Evidence: Hallucinate Less with Context-aware Decoding

3 code implementations • 24 May 2023 • Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih

Language models (LMs) often struggle to pay enough attention to the input context, and generate texts that are unfaithful or contain hallucinations.
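
Context-aware decoding counters this by contrasting the model's predictions with and without the context and amplifying the difference. A minimal sketch of the adjustment as I read the paper, where alpha controls the strength and alpha = 0 recovers standard decoding:

```python
# Sketch of the context-aware adjustment: upweight tokens the model prefers
# *because of* the context, relative to its context-free prior.
import torch

def context_aware_logits(with_ctx: torch.Tensor,
                         without_ctx: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    return (1 + alpha) * with_ctx - alpha * without_ctx

# Toy next-token logits over a 6-token vocabulary.
with_ctx = torch.randn(6)     # model conditioned on context + query
without_ctx = torch.randn(6)  # same model, query only
probs = torch.softmax(context_aware_logits(with_ctx, without_ctx), dim=-1)
```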

David helps Goliath: Inference-Time Collaboration Between Small Specialized and Large General Diffusion LMs

no code implementations • 24 May 2023 • Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov, Marjan Ghazvininejad

Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time.

Can Language Models Solve Graph Problems in Natural Language?

2 code implementations • NeurIPS 2023 • Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.

In-Context Learning, Knowledge Probing +2
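
Both approaches are plain prompt augmentations rather than model changes. A sketch of what each might add to a connectivity question; the wording is paraphrased, not the paper's exact prompts:

```python
# Illustrative prompt construction for the two strategies.
problem = (
    "In an undirected graph, (i,j) means node i and node j are connected. "
    "The edges are: (0,1) (1,2) (2,3). Is there a path from node 0 to node 3?"
)

# Build-a-Graph: ask the model to lay out the graph before answering.
build_a_graph = problem + "\nLet's construct the graph with its nodes and edges first."

# Algorithmic prompting: state the relevant algorithm up front.
algorithmic = (
    "To check for a path between two nodes, run breadth-first search from the "
    "source, expanding unvisited neighbors until the target is found.\n" + problem
)
```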

Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?

no code implementations • 20 Dec 2022 • Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer

Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior.

SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control

2 code implementations • 31 Oct 2022 • Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov

Despite the growing success of diffusion models in continuous-valued domains (e.g., images), similar efforts for discrete domains such as text have yet to match the performance of autoregressive language models.

Diversity, Language Modeling +2
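
SSD-LM sidesteps the discreteness of text by representing each token as an almost-one-hot vector over the vocabulary, so Gaussian diffusion noise can be added in that continuous space. The sketch below uses a ±k logit encoding with argmax rounding, which is my reading of the paper and should be treated as an assumption:

```python
# Sketch of a simplex-style token representation for continuous diffusion.
import torch

def token_to_logit_simplex(token_ids: torch.Tensor, vocab_size: int,
                           k: float = 5.0) -> torch.Tensor:
    # +k at the token's index, -k everywhere else (almost one-hot after softmax).
    rep = torch.full((*token_ids.shape, vocab_size), -k)
    rep.scatter_(-1, token_ids.unsqueeze(-1), k)
    return rep

ids = torch.tensor([3, 1, 4])
noisy = token_to_logit_simplex(ids, vocab_size=8) + torch.randn(3, 8)  # forward noising
recovered = noisy.argmax(dim=-1)  # round back to discrete tokens
```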

Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates

1 code implementation • Findings (EMNLP) 2021 • Xiaochuang Han, Yulia Tsvetkov

Among the most critical limitations of deep learning NLP models are their lack of interpretability, and their reliance on spurious correlations.

Fortifying Toxic Speech Detectors Against Veiled Toxicity

1 code implementation • EMNLP 2020 • Xiaochuang Han, Yulia Tsvetkov

Modern toxic speech detectors are incompetent in recognizing disguised offensive language, such as adversarial attacks that deliberately avoid known toxic lexicons, or manifestations of implicit bias.

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions

1 code implementation • ACL 2020 • Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov

In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers.

Natural Language Inference
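
The underlying quantity (following Koh and Liang, 2017) scores how upweighting one training example would change the loss on a test example: -grad L(z_test)^T H^{-1} grad L(z_train). A toy instance where the Hessian is exact; real NLP models need approximations, and the damping term here is an illustrative assumption:

```python
# Toy influence-function computation on a 3-parameter logistic regression.
import torch

torch.manual_seed(0)
w = torch.randn(3, requires_grad=True)  # model parameters

def loss(x, y, params):
    return torch.nn.functional.binary_cross_entropy_with_logits(x @ params, y)

x_train, y_train = torch.randn(3), torch.tensor(1.0)
x_test, y_test = torch.randn(3), torch.tensor(0.0)

g_train = torch.autograd.grad(loss(x_train, y_train, w), w)[0]
g_test = torch.autograd.grad(loss(x_test, y_test, w), w)[0]

# Hessian of the training loss (a single example in this toy), damped so the
# solve is stable. The result estimates the effect of upweighting the
# training example on the test loss.
H = torch.autograd.functional.hessian(lambda p: loss(x_train, y_train, p), w)
influence = -g_test @ torch.linalg.solve(H + 1e-2 * torch.eye(3), g_train)
```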

No Permanent Friends or Enemies: Tracking Relationships between Nations from News

1 code implementation • NAACL 2019 • Xiaochuang Han, Eunsol Choi, Chenhao Tan

Understanding the dynamics of international politics is important yet challenging for civilians.

Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling

1 code implementation • IJCNLP 2019 • Xiaochuang Han, Jacob Eisenstein

To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain.

Language Modeling, Language Modelling +4
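
The recipe is continued masked language modeling on unlabeled target-domain text before the usual task fine-tuning. A minimal sketch with Hugging Face transformers; the model choice, single optimizer step, and tweet-like example text are illustrative, not the paper's setup:

```python
# Sketch: one MLM step on target-domain text to adapt contextualized embeddings.
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

target_domain_text = ["lol new phone who dis", "smh that game was trash"]
batch = collator([tokenizer(t) for t in target_domain_text])  # random masking

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss  # masked-LM loss on the target domain
loss.backward()
optimizer.step()
# ...then fine-tune the adapted encoder on the labeled sequence-labeling task.
```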

Interactional Stancetaking in Online Forums

no code implementations • CL 2018 • Scott F. Kiesling, Umashanthi Pavalanathan, Jim Fitzpatrick, Xiaochuang Han, Jacob Eisenstein

Theories of interactional stancetaking have been put forward as holistic accounts, but until now, these theories have been applied only through detailed qualitative analysis of (portions of) a few individual conversations.
