no code implementations • 3 Aug 2024 • Qiang Zhang, Jason Naradowsky, Yusuke Miyao
When engaging in conversations, dialogue agents in a virtual simulation environment may exhibit their own emotional states that are unrelated to the immediate conversational context, a phenomenon known as self-emotion.
1 code implementation • 14 Jul 2024 • Shunsuke Kando, Yusuke Miyao, Jason Naradowsky, Shinnosuke Takamichi
This paper proposes a textless method for dependency parsing, examining its effectiveness and limitations.
Automatic Speech Recognition (ASR) +3
1 code implementation • 24 Oct 2023 • Qiang Zhang, Jason Naradowsky, Yusuke Miyao
Knowing how to end and resume conversations over time is a natural part of communication, allowing for discussions to span weeks, months, or years.
1 code implementation • 29 May 2023 • Qiang Zhang, Jason Naradowsky, Yusuke Miyao
We propose the "Ask an Expert" framework in which the model is trained with access to an "expert" which it can consult at each turn.
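The core loop of this framework — the model pausing at each turn to consult an expert before responding — can be sketched as follows. This is a toy illustration, not the paper's implementation; the function names (`consult_expert`, `respond`) and the rule-based expert are assumptions made for the example.

```python
# Toy sketch of the "Ask an Expert" idea: at every dialogue turn the agent
# consults an expert, then conditions its response on the expert's advice.
# The rule-based expert below is purely illustrative.

def consult_expert(context: str) -> str:
    """Stand-in expert: returns a hint for how to respond to the context."""
    if "sad" in context:
        return "acknowledge the user's feelings"
    return "ask a follow-up question"

def respond(history: list) -> str:
    """Produce a response for the current turn, guided by the expert's hint."""
    context = " ".join(history)
    hint = consult_expert(context)  # one expert consultation per turn
    return f"[response guided by hint: {hint}]"

print(respond(["I feel sad today."]))
```

In the actual framework the expert is a trained model rather than a rule, but the per-turn consultation pattern is the same.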
no code implementations • 18 May 2023 • Ryokan Ri, Ryo Ueda, Jason Naradowsky
To develop computational agents that better communicate using their own emergent language, we endow the agents with an ability to focus their attention on particular concepts in the environment.
1 code implementation • Findings (ACL) 2022 • Qiang Zhang, Jason Naradowsky, Yusuke Miyao
We introduce the task of implicit offensive text detection in dialogues, where a statement may have either an offensive or non-offensive interpretation, depending on the listener and context.
no code implementations • ICML Workshop LaReL 2020 • Takuma Yoneda, Matthew R. Walter, Jason Naradowsky
In this work we perform a controlled study of human language use in a competitive team-based game, and search for useful lessons for structuring communication protocol between autonomous agents.
no code implementations • AMTA 2020 • Jason Naradowsky, Xuan Zhang, Kevin Duh
Adapting machine translation systems in the real world is a difficult problem.
no code implementations • 22 Feb 2020 • Alexander I. Cowen-Rivers, Jason Naradowsky
This provides a visual grounding of the message, similar to an enhanced observation of the world, which may include objects outside of the listening agent's field-of-view.
1 code implementation • 17 Feb 2020 • David Samuel, Aditya Ganeshan, Jason Naradowsky
We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models.
Ranked #24 on Music Source Separation on MUSDB18
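The generator-predicts-extractor-weights design is a hypernetwork pattern, which can be sketched minimally as below. The shapes, the linear generator, and the `tanh` extractor are illustrative assumptions, not Meta-TasNet's actual architecture.

```python
# Minimal hypernetwork sketch (not the Meta-TasNet code): a generator maps a
# source embedding (e.g. for "vocals") to the weights of an extractor, which
# is then applied to features of the audio mixture.
import numpy as np

rng = np.random.default_rng(0)
FEAT, HID, EMB = 8, 4, 3  # illustrative dimensions

# Fixed generator parameters: map an EMB-dim source embedding to extractor weights.
G = rng.standard_normal((FEAT * HID, EMB))

def generator(source_embedding: np.ndarray) -> np.ndarray:
    """Predict the weight matrix of a per-source extractor."""
    return (G @ source_embedding).reshape(HID, FEAT)

def extractor(weights: np.ndarray, mixture_feats: np.ndarray) -> np.ndarray:
    """Apply the generated weights to mixture features (stand-in for separation)."""
    return np.tanh(weights @ mixture_feats)

vocals_emb = rng.standard_normal(EMB)   # embedding identifying the target source
mix = rng.standard_normal(FEAT)         # features of the mixed audio
out = extractor(generator(vocals_emb), mix)
print(out.shape)  # (4,)
```

The appeal of this design is parameter sharing: one generator serves all sources, so each target instrument only needs its own embedding rather than a full extractor network.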
2 code implementations • ACL 2018 • Lawrence Wolf-Sonkin, Jason Naradowsky, Sabrina J. Mielke, Ryan Cotterell
Statistical morphological inflectors are typically trained on fully supervised, type-level data.
1 code implementation • SEMEVAL 2018 • Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
We propose a hypothesis only baseline for diagnosing Natural Language Inference (NLI).
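The idea of a hypothesis-only baseline — classify the pair while hiding the premise entirely — can be shown with a toy model. The four-example dataset and word-count features below are made up for illustration; they are not the SemEval data or the paper's classifier.

```python
# Toy hypothesis-only NLI baseline: the model never sees the premise, only
# the hypothesis, yet can still exploit annotation artifacts (e.g. certain
# words correlating with certain labels). Dataset is illustrative.
from collections import Counter

train = [
    ("a man is sleeping", "contradiction"),
    ("someone is outdoors", "entailment"),
    ("nobody is sleeping", "contradiction"),
    ("a person is outside", "entailment"),
]

# Count label co-occurrences for each word in the hypotheses alone.
word_label = {}
for hyp, label in train:
    for w in hyp.split():
        word_label.setdefault(w, Counter())[label] += 1

def predict(hypothesis: str) -> str:
    """Vote by per-word label counts; no premise is ever consulted."""
    votes = Counter()
    for w in hypothesis.split():
        votes.update(word_label.get(w, Counter()))
    return votes.most_common(1)[0][0] if votes else "neutral"

print(predict("a woman is sleeping"))  # label guessed from hypothesis cues only
```

If such a premise-blind model performs well above chance, the dataset's hypotheses leak label information — which is exactly the diagnostic the baseline provides.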
3 code implementations • NAACL 2018 • Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
We present an empirical study of gender bias in coreference resolution systems.
no code implementations • TACL 2018 • Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, Anna Korhonen
Neural architectures are prominent in the construction of language models (LMs).
no code implementations • EMNLP 2017 • Lucas Sterckx, Jason Naradowsky, Bill Byrne, Thomas Demeester, Chris Develder
Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike.
no code implementations • 30 Oct 2016 • Jason Naradowsky, Sebastian Riedel
In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed.
1 code implementation • ICML 2017 • Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel
Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model.