no code implementations • 7 Jun 2024 • Wichayaporn Wongkamjan, Feng Gu, Yanze Wang, Ulf Hermjakob, Jonathan May, Brandon M. Stewart, Jonathan K. Kummerfeld, Denis Peskoff, Jordan Lee Boyd-Graber
The board game Diplomacy is a challenging setting for communicative and cooperative artificial intelligence.
1 code implementation • 19 Apr 2023 • Vesa Akerman, David Baines, Damien Daspit, Ulf Hermjakob, Taeho Jang, Colin Leong, Michael Martin, Joel Mathew, Jonathan Robie, Marcus Schwarting
Efficiently and accurately translating a corpus into a low-resource language remains a challenge, regardless of the strategies employed, whether manual, automated, or a combination of the two.
no code implementations • 1 Feb 2023 • Joel Mathew, Ulf Hermjakob
Technology has increasingly become an integral part of the Bible translation process.
no code implementations • COLING 2018 • Tim O'Gorman, Michael Regan, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Martha Palmer
There are few corpora that endeavor to represent the semantic content of entire documents.
no code implementations • ACL 2018 • Ulf Hermjakob, Jonathan May, Michael Pust, Kevin Knight
In a corruption of John Searle's famous AI thought experiment, the Chinese Room (Searle, 1980), we twist its original intent by enabling humans to translate text, e.g., from Uyghur to English, even if they don't have any prior knowledge of the source language.
no code implementations • ACL 2018 • Ulf Hermjakob, Jonathan May, Kevin Knight
We present uroman, a tool for converting text in a wide range of languages and scripts, such as Chinese, Arabic, and Cyrillic, into a common Latin-script representation.
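uroman itself is released by the authors as a standalone tool (originally written in Perl). As a rough illustration of the underlying idea only, not of uroman's implementation or interface, the sketch below strips diacritics via Unicode NFKD decomposition and maps the remaining non-Latin characters through a small hand-written table (a hypothetical toy covering just one name):

```python
import unicodedata

# Toy romanization sketch (NOT uroman): NFKD-decompose to split off diacritics,
# drop the combining marks, and map remaining non-Latin base characters through
# a small hand-written table. uroman's real data cover hundreds of scripts.
TOY_CYRILLIC_TO_LATIN = {
    "И": "I", "г": "g", "о": "o", "р": "r", "ь": "",
    "С": "S", "т": "t", "а": "a", "в": "v", "и": "i",
    "н": "n", "с": "s", "к": "k",
}

def romanize(text: str) -> str:
    out = []
    for ch in unicodedata.normalize("NFKD", text):
        if unicodedata.combining(ch):  # e.g. the breve split off from 'й'
            continue
        out.append(TOY_CYRILLIC_TO_LATIN.get(ch, ch))
    return "".join(out)

print(romanize("Игорь Стравинский"))  # -> Igor Stravinskii
```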
1 code implementation • 4 Dec 2015 • Sahil Garg, Aram Galstyan, Ulf Hermjakob, Daniel Marcu
We advance the state of the art in biomolecular interaction extraction with three contributions: (i) we show that deep Abstract Meaning Representations (AMRs) significantly improve the accuracy of a biomolecular interaction extraction system compared to a baseline that relies solely on surface- and syntax-based features; (ii) in contrast with previous approaches that infer relations on a sentence-by-sentence basis, we expand our framework to enable consistent predictions over sets of sentences (documents); (iii) we further modify and expand a graph-kernel learning framework to enable concurrent exploitation of automatically induced AMR (semantic) and dependency-structure (syntactic) representations.
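Several of the entries here build on AMR graphs. As a point of reference only, not the paper's system, the sketch below uses the third-party penman library (an assumed convenience for reading PENMAN notation) to decode a standard AMR guidelines example into the kind of edge triples that, alongside syntactic dependency edges, a graph-kernel learner could consume:

```python
# Minimal sketch: decode an AMR written in PENMAN notation into triples.
# Assumes the third-party `penman` package (pip install penman).
import penman

# Standard AMR guidelines example: "The boy wants the girl to believe him."
AMR = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (b2 / believe-01
             :ARG0 (g / girl)
             :ARG1 b))
"""

graph = penman.decode(AMR)
for source, role, target in graph.triples:
    print(source, role, target)
# e.g. ('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'), ...
# Edge triples like these are the graph structure a kernel method can compare
# across sentences, in parallel with dependency-parse edges.
```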
no code implementations • 24 Apr 2015 • Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, Jonathan May
To parse English into AMR within a syntax-based machine translation (SBMT) framework, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling.
no code implementations • WS 2013 • Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, Nathan Schneider