Search Results for author: Skyler Hallinan

Found 12 papers, 10 papers with code

Misinfo Reaction Frames: Reasoning about Readers’ Reactions to News Headlines

3 code implementations • ACL 2022 • Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi

Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g. inferring the writer’s intent), emotionally (e.g. feeling distrust), and behaviorally (e.g. sharing the news with their friends).

Misinformation

Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning

no code implementations • 26 May 2025 • JaeHun Jung, Seungju Han, Ximing Lu, Skyler Hallinan, David Acuna, Shrimai Prabhumoye, Mostafa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Yejin Choi

This motivates us to ask: What kind of diversity in training data actually drives generalization in language models -- and how can we measure and amplify it?

Diversity • Math • +1

Amulet: Putting Complex Multi-Turn Conversations on the Stand with LLM Juries

no code implementations • 26 May 2025 • Sahana Ramnath, Anurag Mudgil, Brihi Joshi, Skyler Hallinan, Xiang Ren

Today, large language models are widely used as judges to evaluate responses from other language models.

StyleRemix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements

1 code implementation • 28 Aug 2024 • Jillian Fisher, Skyler Hallinan, Ximing Lu, Mitchell Gordon, Zaid Harchaoui, Yejin Choi

Authorship obfuscation, rewriting a text to intentionally obscure the identity of the author, is an important but challenging task.

STEER: Unified Style Transfer with Expert Reinforcement

1 code implementation • 13 Nov 2023 • Skyler Hallinan, Faeze Brahman, Ximing Lu, JaeHun Jung, Sean Welleck, Yejin Choi

We propose STEER: Unified Style Transfer with Expert Reinforcement, a unified framework developed to overcome the challenge of limited parallel data for style transfer.

Style Transfer • Text Style Transfer

Tailoring Self-Rationalizers with Multi-Reward Distillation

1 code implementation • 6 Nov 2023 • Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren

Results on five difficult question-answering datasets (StrategyQA, QuaRel, OpenBookQA, NumerSense, and QASC) show that MaRio not only improves task accuracy, but also improves the self-rationalization quality of small LMs across the aforementioned axes better than a supervised fine-tuning (SFT) baseline.

Diversity • Question Answering • +1

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

1 code implementation • 24 May 2023 • Ximing Lu, Faeze Brahman, Peter West, JaeHun Jung, Khyathi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi

While extreme-scale language models have demonstrated exceptional performance on a variety of language tasks, the degree of control over these language models through pure prompting can often be limited.

Language Modeling • Language Modelling • +2

Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

1 code implementation • 20 Dec 2022 • Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap

Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle.

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

1 code implementation • 6 Oct 2022 • Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi

Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of commonsense knowledge elicited from GPT-3.

Question Answering • Reinforcement Learning (RL)
