Search Results for author: Alisa Liu

Found 15 papers, 12 papers with code

That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?

1 code implementation • 23 Oct 2023 • Jaechan Lee, Alisa Liu, Orevaoghene Ahia, Hila Gonen, Noah A. Smith

In experiments, we compare MT-specific models and language models for (i) their preference when given an ambiguous subsentence, (ii) their sensitivity to disambiguating context, and (iii) the performance disparity between figurative and literal source sentences.

Translation

How Language Model Hallucinations Can Snowball

1 code implementation • 22 May 2023 • Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith

A major risk of using language models in practical applications is their tendency to hallucinate incorrect statements.

Language Modelling • Question Answering

We're Afraid Language Models Aren't Modeling Ambiguity

1 code implementation • 27 Apr 2023 • Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi

We find that the task remains extremely challenging, including for GPT-4, whose generated disambiguations are considered correct only 32% of the time in human evaluation, compared to 90% for disambiguations in our dataset.

Self-Instruct: Aligning Language Models with Self-Generated Instructions

17 code implementations • 20 Dec 2022 • Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi

Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations.

Instruction Following • Language Modelling

Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

1 code implementation • 20 Dec 2022 • Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap

Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle.

WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation

1 code implementation • 16 Jan 2022 • Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi

Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns.

Natural Language Inference • Text Generation

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation • ACL 2022 • Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.

Language Modelling • Open-Ended Question Answering

Bach or Mock? A Grading Function for Chorales in the Style of J.S. Bach

1 code implementation • 23 Jun 2020 • Alexander Fang, Alisa Liu, Prem Seetharaman, Bryan Pardo

Deep generative systems that learn probabilistic models from a corpus of existing music do not explicitly encode knowledge of a musical style, compared to traditional rule-based systems.

Incorporating Music Knowledge in Continual Dataset Augmentation for Music Generation

1 code implementation • 23 Jun 2020 • Alisa Liu, Alexander Fang, Gaëtan Hadjeres, Prem Seetharaman, Bryan Pardo

In this paper, we present augmentative generation (Aug-Gen), a method of dataset augmentation for any music generation system trained on a resource-constrained domain.

Music Generation

Model selection for deep audio source separation via clustering analysis

no code implementations • 23 Oct 2019 • Alisa Liu, Prem Seetharaman, Bryan Pardo

We compare our confidence-based ensemble approach to using individual models with no selection, to an oracle that always selects the best model and to a random model selector.

Audio Source Separation • Clustering +1

CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense

1 code implementation • WS 2019 • Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, Doug Downey

To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems.

Common Sense Reasoning • Question Answering +2

CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense

1 code implementation • 8 Apr 2019 • Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, Doug Downey

To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems.

Ranked #1 on Common Sense Reasoning on CODAH (using extra training data)

Common Sense Reasoning • Question Answering +2
