no code implementations • LILT 2016 • Ana Marasović, Mengfei Zhou, Alexis Palmer, Anette Frank
Modal verbs have different interpretations depending on their context.
no code implementations • 16 Nov 2023 • Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović
We conclude that not only is the question of robustness in NLP as yet unresolved, but even some of the approaches to measure robustness need to be reassessed.
no code implementations • 20 Oct 2023 • Jacob K. Johnson, Ana Marasović
Contrast set consistency is a robustness measure that evaluates the rate at which a model correctly responds to all instances in a bundle of minimally different examples that rely on the same knowledge.
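The metric described above can be sketched in a few lines. This is an illustrative implementation, not the paper's code; the function and argument names are hypothetical.

```python
from collections import defaultdict

def contrast_consistency(predictions, golds, bundle_ids):
    """Fraction of bundles (contrast sets) in which the model answers
    every minimally different instance correctly. (Illustrative sketch.)"""
    bundles = defaultdict(list)
    for pred, gold, bid in zip(predictions, golds, bundle_ids):
        bundles[bid].append(pred == gold)
    # A bundle counts only if all of its instances are answered correctly.
    return sum(all(hits) for hits in bundles.values()) / len(bundles)
```

For example, a model that answers both instances of one bundle correctly but misses one instance of a second bundle scores a consistency of 0.5, even though its per-instance accuracy is 0.75.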
1 code implementation • 1 Nov 2022 • Abhilasha Ravichander, Matt Gardner, Ana Marasović
We also have workers make three kinds of edits to the passage -- paraphrasing the negated statement, changing the scope of the negation, and reversing the negation -- resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts.
no code implementations • 24 Oct 2022 • Alexis Ross, Matthew E. Peters, Ana Marasović
Specifically, we evaluate how training self-rationalization models with free-text rationales affects robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes.
no code implementations • 13 Sep 2022 • Jack Hessel, Ana Marasović, Jena D. Hwang, Lillian Lee, Jeff Da, Rowan Zellers, Robert Mankoff, Yejin Choi
Large neural networks can now generate jokes, but do they really "understand" humor?
no code implementations • 24 May 2022 • Shruti Palaskar, Akshita Bhagia, Yonatan Bisk, Florian Metze, Alan W Black, Ana Marasović
Combining the visual modality with pretrained language models has been surprisingly effective for simple descriptive tasks such as image captioning.
1 code implementation • Findings (NAACL) 2022 • Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters
We identify the right prompting approach by extensively exploring natural language prompts on FEB. Then, by using this prompt and scaling the model size, we demonstrate that making progress on few-shot self-rationalization is possible.
1 code implementation • Findings (ACL) 2021 • Kaiser Sun, Ana Marasović
An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components and only one of them (effective attention) contributes to the model output.
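The decomposition above rests on a linear-algebra fact: any component of an attention row lying in the left null space of the value matrix V contributes nothing to the output A @ V. A minimal NumPy sketch of that idea (assuming V has full column rank; this is not the paper's code):

```python
import numpy as np

def effective_attention(A, V):
    """Project each row of the attention matrix A onto the column space
    of V; the discarded component (in the left null space of V) does not
    affect the product A @ V. (Illustrative sketch.)"""
    Q, _ = np.linalg.qr(V)   # orthonormal basis for col(V)
    P = Q @ Q.T              # projector onto col(V)
    return A @ P             # effective component; A @ (I - P) is inert

rng = np.random.default_rng(0)
A = rng.random((5, 5))
A /= A.sum(axis=1, keepdims=True)  # row-stochastic, like softmax attention
V = rng.random((5, 3))
A_eff = effective_attention(A, V)
```

Here `A_eff @ V` equals `A @ V` exactly, while the residual `(A - A_eff) @ V` is zero, so only the effective component shapes the sublayer's output.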
no code implementations • EMNLP 2021 • Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, Matt Gardner
Finally, we conclude with some recommendations for how to create and document web-scale datasets from a scrape of the internet.
no code implementations • 24 Feb 2021 • Sarah Wiegreffe, Ana Marasović
Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated textual explanations.
no code implementations • Findings (ACL) 2021 • Alexander Hoyle, Ana Marasović, Noah Smith
Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks.
1 code implementation • Findings (ACL) 2021 • Alexis Ross, Ana Marasović, Matthew E. Peters
Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case).
1 code implementation • EMNLP 2021 • Sarah Wiegreffe, Ana Marasović, Noah A. Smith
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
no code implementations • 15 Oct 2020 • Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.
5 code implementations • ACL 2020 • Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP.
1 code implementation • IJCNLP 2019 • Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner
Machine comprehension of texts longer than a single sentence often requires coreference resolution.
1 code implementation • NAACL 2018 • Ana Marasović, Anette Frank
For over a decade, machine learning has been used to extract opinion-holder-target structures from text to answer the question "Who expressed what kind of sentiment towards what?".
Ranked #2 on Fine-Grained Opinion Analysis on MPQA (using extra training data)
1 code implementation • EMNLP 2017 • Ana Marasović, Leo Born, Juri Opitz, Anette Frank
We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.
Ranked #1 on Abstract Anaphora Resolution on The ARRAU Corpus
no code implementations • WS 2016 • Ana Marasović, Anette Frank
Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope.