1 code implementation • Findings (ACL) 2022 • Qiang Zhang, Jason Naradowsky, Yusuke Miyao
We introduce the task of implicit offensive text detection in dialogues, where a statement may have either an offensive or non-offensive interpretation, depending on the listener and context.
no code implementations • ICML Workshop LaReL 2020 • Takuma Yoneda, Matthew R. Walter, Jason Naradowsky
In this work, we perform a controlled study of human language use in a competitive team-based game, and search for useful lessons for structuring communication protocols between autonomous agents.
no code implementations • AMTA 2020 • Jason Naradowsky, Xuan Zhang, Kevin Duh
Adapting machine translation systems in the real world is a difficult problem.
no code implementations • 22 Feb 2020 • Alexander I. Cowen-Rivers, Jason Naradowsky
This provides a visual grounding of the message, similar to an enhanced observation of the world, which may include objects outside of the listening agent's field-of-view.
1 code implementation • 17 Feb 2020 • David Samuel, Aditya Ganeshan, Jason Naradowsky
We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models.
Ranked #16 on Music Source Separation on MUSDB18
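The weight-generation idea behind Meta-TasNet can be illustrated with a toy hypernetwork: a generator maps a per-source embedding to the parameters of that source's extractor. The sketch below is a minimal stand-in (linear generator, linear extractors, made-up dimensions), not the actual Meta-TasNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT = 8        # feature dimension of the mixture representation (illustrative)
EMBED = 4       # dimension of the per-source embedding (illustrative)
N_SOURCES = 3   # e.g. vocals, drums, bass

# Generator: maps a source embedding to the weights of a linear extractor.
G = rng.normal(size=(EMBED, FEAT * FEAT)) * 0.1
source_embeddings = rng.normal(size=(N_SOURCES, EMBED))

def generate_extractor(embedding):
    """Predict a (FEAT x FEAT) extractor weight matrix from a source embedding."""
    return (embedding @ G).reshape(FEAT, FEAT)

def separate(mixture):
    """Apply each generated extractor to the same mixture representation."""
    return [generate_extractor(e) @ mixture for e in source_embeddings]

mixture = rng.normal(size=FEAT)
estimates = separate(mixture)   # one estimate per source, each of shape (FEAT,)
```

Because every extractor's weights come from the same generator, knowledge is shared across sources while each extractor remains specialized, which is the point of the meta-learning-inspired design.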
2 code implementations • ACL 2018 • Lawrence Wolf-Sonkin, Jason Naradowsky, Sabrina J. Mielke, Ryan Cotterell
Statistical morphological inflectors are typically trained on fully supervised, type-level data.
1 code implementation • SEMEVAL 2018 • Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI).
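The diagnostic idea is that an NLI model which never sees the premise should perform at chance; if it doesn't, the hypotheses leak label information. A minimal sketch of such a baseline (toy hand-written examples, simple word-vote classifier, not the paper's actual model):

```python
from collections import Counter, defaultdict

# Tiny illustrative dataset of (premise, hypothesis, label) triples.
# The baseline deliberately ignores the premise field.
train = [
    ("A man is eating.", "Nobody is eating.", "contradiction"),
    ("A dog runs.", "Nobody is running.", "contradiction"),
    ("A woman sleeps.", "A person is sleeping.", "entailment"),
    ("Kids play outside.", "A person is playing.", "entailment"),
]

# Learn per-word label counts from the hypotheses only.
word_labels = defaultdict(Counter)
for _premise, hypothesis, label in train:
    for word in hypothesis.lower().split():
        word_labels[word][label] += 1

def predict(hypothesis):
    """Vote over hypothesis words; the premise is never consulted."""
    votes = Counter()
    for word in hypothesis.lower().split():
        votes.update(word_labels.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "neutral"

print(predict("Nobody is sleeping."))  # cue word "nobody" pulls toward contradiction
```

Even this trivial classifier exploits annotation artifacts like negation words, which is exactly the kind of leakage the hypothesis-only diagnostic is meant to expose.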
1 code implementation • NAACL 2018 • Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
We present an empirical study of gender bias in coreference resolution systems.
no code implementations • TACL 2018 • Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, Anna Korhonen
Neural architectures are prominent in the construction of language models (LMs).
no code implementations • EMNLP 2017 • Lucas Sterckx, Jason Naradowsky, Bill Byrne, Thomas Demeester, Chris Develder
Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike.
no code implementations • 30 Oct 2016 • Jason Naradowsky, Sebastian Riedel
In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed.
1 code implementation • ICML 2017 • Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel
Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model.