Search Results for author: Jena D. Hwang

Found 43 papers, 14 papers with code

Sprucing up Supersenses: Untangling the Semantic Clusters of Accompaniment and Purpose

no code implementations COLING (LAW) 2020 Jena D. Hwang, Nathan Schneider, Vivek Srikumar

We reevaluate an existing adpositional annotation scheme with respect to two thorny semantic domains: accompaniment and purpose.

K-SNACS: Annotating Korean Adposition Semantics

no code implementations DMR (COLING) 2020 Jena D. Hwang, Hanwool Choe, Na-Rae Han, Nathan Schneider

While many languages use adpositions to encode semantic relationships between content words in a sentence (e.g., agentivity or temporality), the details of how adpositions work vary widely across languages with respect to both form and meaning.

Putting Context in SNACS: A 5-Way Classification of Adpositional Pragmatic Markers

no code implementations LREC (LAW) 2022 Yang Janet Liu, Jena D. Hwang, Nathan Schneider, Vivek Srikumar

The SNACS framework provides a network of semantic labels called supersenses for annotating adpositional semantics in corpora.

SPLAIN: Augmenting Cybersecurity Warnings with Reasons and Data

no code implementations 19 Nov 2023 Vera A. Kazakova, Jena D. Hwang, Bonnie J. Dorr, Yorick Wilks, J. Blake Gage, Alex Memory, Mark A. Clark

Effective cyber threat recognition and prevention demand comprehensible forecasting systems, as prior approaches commonly offer limited and, ultimately, unconvincing information.

The Generative AI Paradox: "What It Can Create, It May Not Understand"

no code implementations 31 Oct 2023 Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

Specifically, we propose and test the Generative AI Paradox hypothesis: generative models, having been trained directly to reproduce expert-like outputs, acquire generative capabilities that are not contingent upon -- and can therefore exceed -- their ability to understand those same types of outputs.

"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation

no code implementations 26 Oct 2023 Allyson Ettinger, Jena D. Hwang, Valentina Pyatkin, Chandra Bhagavatula, Yejin Choi

We compare models' analysis of this semantic structure across two settings: 1) direct production of AMR parses based on zero- and few-shot prompts, and 2) indirect partial reconstruction of AMR via metalinguistic natural language queries (e.g., "Identify the primary event of this sentence, and the predicate corresponding to that event.").
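The two settings contrasted in the abstract can be sketched as prompt templates. This is an illustrative sketch only: the function names and prompt wording are hypothetical, not the authors' exact prompts (apart from the metalinguistic query quoted above).

```python
# Sketch of the two evaluation settings: direct AMR production vs.
# indirect, metalinguistic reconstruction of AMR fragments.
# Prompt wording is illustrative, not the paper's exact text.

def direct_amr_prompt(sentence: str) -> str:
    """Setting 1: ask the model to produce a full AMR parse directly."""
    return (
        "Produce the Abstract Meaning Representation (AMR) "
        f"for the following sentence:\n{sentence}\nAMR:"
    )

def metalinguistic_prompt(sentence: str) -> str:
    """Setting 2: recover pieces of the AMR via natural language queries."""
    return (
        f"Sentence: {sentence}\n"
        "Identify the primary event of this sentence, "
        "and the predicate corresponding to that event."
    )

print(direct_amr_prompt("The boy wants to go."))
print(metalinguistic_prompt("The boy wants to go."))
```

The contrast is that setting 1 demands the full formal structure at once, while setting 2 probes the same structure one query at a time.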


Cultural and Linguistic Diversity Improves Visual Representations

no code implementations 22 Oct 2023 Andre Ye, Sebastin Santy, Jena D. Hwang, Amy X. Zhang, Ranjay Krishna

For instance, image descriptions in different languages are typically assumed to be translations of the same semantic content.


COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

no code implementations 3 Jun 2023 Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, Maarten Sap

To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context.

Faith and Fate: Limits of Transformers on Compositionality

1 code implementation 29 May 2023 Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures.
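As a toy illustration of the computation-graph framing described above (my own minimal sketch under stated assumptions, not the authors' code or their tasks), a compositional task such as two-digit addition can be decomposed into intermediate sub-procedures whose dependencies form a graph:

```python
# Minimal sketch: a compositional task (2-digit addition) broken into
# intermediate sub-procedures. Each entry depends on earlier entries,
# so the dict records a small computation graph.

def two_digit_add_graph(a: int, b: int) -> dict:
    """Decompose a + b for 0 <= a, b <= 99 into single-digit steps."""
    steps = {}
    steps["ones"] = (a % 10) + (b % 10)                      # sum of ones digits
    steps["carry"] = steps["ones"] // 10                     # depends on "ones"
    steps["tens"] = (a // 10) + (b // 10) + steps["carry"]   # depends on "carry"
    steps["result"] = steps["tens"] * 10 + steps["ones"] % 10
    return steps

graph = two_digit_add_graph(47, 38)
# The number and depth of dependent steps is one natural proxy for
# the complexity that such a formulation lets you quantify.
```

More digits add more carry steps, making the graph deeper; that growth in depth is the kind of complexity measure the computation-graph view makes explicit.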

Causal schema induction for knowledge discovery

1 code implementation 27 Mar 2023 Michael Regan, Jena D. Hwang, Keisuke Sakaguchi, James Pustejovsky

In this work, we investigate how to apply schema induction models to the task of knowledge discovery for enhanced search of English-language news texts.

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation

no code implementations 19 Dec 2022 Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Lianhui Qin, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi

Here, we investigate an alternative that a priori seems impossible: can smaller language models (e.g., GPT-2) win over models that are orders of magnitude larger and better (e.g., GPT-3), if powered with novel commonsense distillation algorithms?


ComFact: A Benchmark for Linking Contextual Commonsense Knowledge

1 code implementation 23 Oct 2022 Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut

Understanding rich narratives, such as dialogues and stories, often requires natural language processing systems to access relevant knowledge from commonsense knowledge graphs.


Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions

no code implementations 23 May 2022 Emily Allaway, Jena D. Hwang, Chandra Bhagavatula, Kathleen McKeown, Doug Downey, Yejin Choi

Generics express generalizations about the world (e.g., birds can fly) that are not universally true (e.g., newborn birds and penguins cannot fly).


The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning

no code implementations 10 Feb 2022 Jack Hessel, Jena D. Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, Yejin Choi

We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents.


On-the-Fly Attention Modulation for Neural Generation

no code implementations Findings (ACL) 2021 Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi

Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.


Social Chemistry 101: Learning to Reason about Social and Moral Norms

1 code implementation EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

3 code implementations 12 Oct 2020 Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi

Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events.


Analysis of the Penn Korean Universal Dependency Treebank (PKT-UD): Manual Revision to Build Robust Parsing Model in Korean

no code implementations WS 2020 Tae Hwan Oh, Ji Yoon Han, Hyonsu Choe, Seokwon Park, Han He, Jinho D. Choi, Na-Rae Han, Jena D. Hwang, Hansaem Kim

In this paper, we first raise important issues regarding the Penn Korean Universal Treebank (PKT-UD) and address these issues by revising the entire corpus manually, with the aim of producing cleaner UD annotations that are more faithful to Korean grammar.

Preparing SNACS for Subjects and Objects

1 code implementation WS 2019 Adi Shalev, Jena D. Hwang, Nathan Schneider, Vivek Srikumar, Omri Abend, Ari Rappoport

Research on adpositions and possessives in multiple languages has led to a small inventory of general-purpose meaning classes that disambiguate tokens.

Coordinate Structures in Universal Dependencies for Head-final Languages

no code implementations WS 2018 Hiroshi Kanayama, Na-Rae Han, Masayuki Asahara, Jena D. Hwang, Yusuke Miyao, Jinho D. Choi, Yuji Matsumoto

This paper discusses the representation of coordinate structures in the Universal Dependencies framework for two head-final languages, Japanese and Korean.

Comprehensive Supersense Disambiguation of English Prepositions and Possessives

1 code implementation ACL 2018 Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, Omri Abend

Semantic relations are often signaled with prepositional or possessive marking--but extreme polysemy bedevils their analysis and automatic interpretation.

Double Trouble: The Problem of Construal in Semantic Annotation of Adpositions

no code implementations SEMEVAL 2017 Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Vivek Srikumar, Nathan Schneider

We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000-word corpus of English.

Adposition and Case Supersenses v2.6: Guidelines for English

4 code implementations 7 Apr 2017 Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Sarah R. Moeller, Omri Abend, Adi Shalev, Austin Blodgett, Jakob Prange

This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 52 semantic labels ("supersenses") that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-nlp/streusle/ ; version 4.5 tracks guidelines version 2.6).

Coping with Construals in Broad-Coverage Semantic Annotation of Adpositions

no code implementations 10 Mar 2017 Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Vivek Srikumar, Nathan Schneider

We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000-word corpus of English.

A corpus of preposition supersenses in English web reviews

no code implementations 8 May 2016 Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Kathryn Conger, Tim O'Gorman, Martha Palmer

We present the first corpus annotated with preposition supersenses, unlexicalized categories for semantic functions that can be marked by English prepositions (Schneider et al., 2015).

Criteria for Identifying and Annotating Caused Motion Constructions in Corpus Data

no code implementations LREC 2014 Jena D. Hwang, Annie Zaenen, Martha Palmer

While natural language processing performance has been improved through the recognition that there is a relationship between the semantics of the verb and the syntactic context in which the verb is realized, sentences where the verb does not conform to the expected syntax-semantic patterning behavior remain problematic.

PropBank: Semantics of New Predicate Types

no code implementations LREC 2014 Claire Bonial, Julia Bonn, Kathryn Conger, Jena D. Hwang, Martha Palmer

This research focuses on expanding PropBank, a corpus annotated with predicate argument structures, with new predicate types; namely, noun, adjective and complex predicates, such as Light Verb Constructions.

