no code implementations • NAACL (AmericasNLP) 2021 • Marcel Bollmann, Rahul Aralikatte, Héctor Murrieta Bello, Daniel Hershcovich, Miryam de Lhoneux, Anders Søgaard
We evaluated a range of neural machine translation techniques developed specifically for low-resource scenarios.
no code implementations • ACL (WAT) 2021 • Rahul Aralikatte, Héctor Ricardo Murrieta Bello, Miryam de Lhoneux, Daniel Hershcovich, Marcel Bollmann, Anders Søgaard
This work shows that competitive translation results can be obtained in a constrained setting by incorporating the latest advances in memory and compute optimization.
1 code implementation • CoNLL (EMNLP) 2021 • Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, Anders Søgaard
Negation is one of the most fundamental concepts in human cognition and language, and several natural language inference (NLI) probes have been designed to investigate pretrained language models’ ability to detect and reason with negation.
1 code implementation • 30 Mar 2023 • Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich
The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue.
no code implementations • 20 Feb 2023 • Anders Søgaard, Daniel Hershcovich, Miryam de Lhoneux
Van Miltenburg et al. (2021) suggest NLP research should adopt preregistration to prevent fishing expeditions and to promote publication of negative results.
1 code implementation • 10 May 2022 • Daniel Hershcovich, Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, Markus Leippold
We argue that this deficiency is one of the reasons why very few publications in NLP report key figures that would allow a more thorough examination of environmental impact.
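Such a key figure can be approximated from hardware power draw, runtime and grid carbon intensity; the sketch below uses made-up placeholder values, not figures reported in the paper.

```python
# Back-of-the-envelope training-emissions estimate (illustrative values only;
# none of these numbers come from the paper).
gpu_count = 4            # number of GPUs used for training (assumed)
gpu_power_kw = 0.3       # average draw per GPU in kW, e.g. ~300 W (assumed)
hours = 72.0             # total wall-clock training time (assumed)
pue = 1.5                # data-centre power usage effectiveness (assumed)
carbon_intensity = 0.4   # kg CO2e per kWh of the local grid (assumed)

energy_kwh = gpu_count * gpu_power_kw * hours * pue
co2e_kg = energy_kwh * carbon_intensity
print(f"Energy: {energy_kwh:.1f} kWh, emissions: {co2e_kg:.1f} kg CO2e")
```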
1 code implementation • 3 May 2022 • Stephanie Brandl, Daniel Hershcovich, Anders Søgaard
We argue that we need to evaluate model interpretability methods 'in the wild', i.e., in situations where professionals make critical decisions, and models can potentially assist them.
1 code implementation • NAACL (DADC) 2022 • Ruixiang Cui, Daniel Hershcovich, Anders Søgaard
Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today's NLU models still struggle to capture their semantics.
no code implementations • ACL 2022 • Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, Anders Søgaard
Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages.
no code implementations • CoNLL (EMNLP) 2021 • Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard
Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g. (Paris, Capital, France).
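A common way to probe for this kind of relational knowledge (a generic cloze-style probe, not necessarily the setup used in this paper) is to query a masked language model with a template sentence:

```python
# Minimal relational-knowledge probe with a masked LM (illustrative sketch;
# the model choice and template are assumptions, not the paper's setup).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# (Paris, Capital, France) rendered as a cloze template.
for prediction in fill_mask("Paris is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```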
1 code implementation • 7 Aug 2021 • Ruixiang Cui, Rahul Aralikatte, Heather Lent, Daniel Hershcovich
We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese and English.
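MCWQ pairs questions with SPARQL queries over Wikidata; purely as an illustration of that target representation, a hand-written query (not one taken from the dataset) can be run against the public endpoint:

```python
# Execute a hand-written SPARQL query against Wikidata (illustrative only;
# this query is not drawn from MCWQ itself).
import requests

query = """
SELECT ?directorLabel WHERE {
  ?film rdfs:label "Blade Runner"@en ;
        wdt:P57 ?director .                # P57 = director
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "mcwq-example/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["directorLabel"]["value"])
```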
1 code implementation • ACL (IWPT) 2021 • Ruixiang Cui, Daniel Hershcovich
Broad-coverage meaning representations in NLP mostly focus on explicitly expressed content.
no code implementations • 4 Jun 2021 • Ruixiang Cui, Daniel Hershcovich
We show that the implicit UCCA parser does not handle numeric fused-heads (NFHs) consistently, which could result from inconsistent annotation, insufficient training data, or a modelling limitation.
no code implementations • 19 Feb 2021 • Tom Hope, Ronen Tamari, Hyeonsu Kang, Daniel Hershcovich, Joel Chan, Aniket Kittur, Dafna Shahaf
Large repositories of products, patents and scientific papers offer an opportunity for building systems that scour millions of ideas and help users discover inspirations.
no code implementations • 29 Jan 2021 • Mostafa Abdou, Ana Valeria Gonzalez, Mariya Toneva, Daniel Hershcovich, Anders Søgaard
We evaluate, across two fMRI datasets, whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
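Such alignment is often measured with an encoding model that maps model activations to voxel responses; the sketch below shows that general recipe on random placeholder data and is not the paper's actual evaluation setup.

```python
# Generic encoding-model sketch: predict fMRI voxel responses from LM
# activations with ridge regression (random placeholder data; the paper's
# datasets, models and evaluation details differ).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 768, 50
activations = rng.normal(size=(n_stimuli, n_features))  # LM representations
voxels = rng.normal(size=(n_stimuli, n_voxels))         # brain recordings

# Average cross-validated R^2 over voxels as a crude alignment score.
scores = [
    cross_val_score(Ridge(alpha=10.0), activations, voxels[:, v],
                    cv=5, scoring="r2").mean()
    for v in range(n_voxels)
]
print(f"Mean alignment (R^2): {np.mean(scores):.3f}")
```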
no code implementations • COLING 2020 • Omri Abend, Dotan Dvir, Daniel Hershcovich, Jakob Prange, Nathan Schneider
This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew.
2 code implementations • COLING 2020 • Daniel Hershcovich, Nathan Schneider, Dotan Dvir, Jakob Prange, Miryam de Lhoneux, Omri Abend
Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other.
no code implementations • CONLL 2020 • Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O'Gorman, Nianwen Xue, Daniel Zeman
Extending a similar setup from the previous year, five distinct approaches to representing sentence meaning as directed graphs were included in the English training and evaluation data for the task, packaged in a uniform graph abstraction and serialization; for four of these representation frameworks, additional training and evaluation data was provided for one additional language per framework.
no code implementations • CONLL 2020 • Ofir Arviv, Ruixiang Cui, Daniel Hershcovich
This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference on Computational Natural Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline and winning systems in the 2019 MRP shared task.
1 code implementation • 12 Oct 2020 • Rahul Aralikatte, Mostafa Abdou, Heather Lent, Daniel Hershcovich, Anders Søgaard
Coreference resolution and semantic role labeling are NLP tasks that capture different aspects of semantics, indicating, respectively, which expressions refer to the same entity and what semantic roles expressions serve in the sentence.
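The two layers of annotation can be contrasted on a single sentence; the toy analysis below is hand-written for illustration and follows no particular corpus's guidelines.

```python
# Hand-written toy example contrasting the two annotation layers
# (illustrative only; not drawn from any dataset used in the paper).
sentence = "Mary sold the bike because she needed money."

# Coreference: which expressions refer to the same entity.
coreference_clusters = [
    ["Mary", "she"],  # Mary and she corefer
]

# Semantic roles: who did what to whom, per predicate.
semantic_roles = {
    "sold":   {"ARG0": "Mary", "ARG1": "the bike"},
    "needed": {"ARG0": "she",  "ARG1": "money"},
}

print(coreference_clusters)
print(semantic_roles)
```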
no code implementations • 12 Oct 2020 • Ofir Arviv, Ruixiang Cui, Daniel Hershcovich
This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference on Computational Natural Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline and winning systems in the 2019 MRP shared task.
Ranked #2 on Semantic Parsing on DRG (English, MRP 2020)
no code implementations • WS 2020 • Daniel Hershcovich, Miryam de Lhoneux, Artur Kulmizev, Elham Pejhan, Joakim Nivre
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020.
no code implementations • DMR (COLING) 2020 • Ruixiang Cui, Daniel Hershcovich
Predicate-argument structure analysis is a central component in meaning representations of text.
1 code implementation • 25 May 2020 • Daniel Hershcovich, Miryam de Lhoneux, Artur Kulmizev, Elham Pejhan, Joakim Nivre
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020.
2 code implementations • ACL (MWE) 2021 • Nelson F. Liu, Daniel Hershcovich, Michael Kranzlein, Nathan Schneider
In lexical semantics, full-sentence segmentation and segment labeling of various phenomena are generally treated separately, despite their interdependence.
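One standard way to treat segmentation and labeling jointly is to fold the segment label into a BIO-style tag per token; the encoding below is a generic illustration, not the paper's exact tagset.

```python
# Generic joint BIO segmentation + labeling encoding (illustrative sketch;
# the paper's actual label inventory is richer than this toy example).
tokens = ["She", "kicked", "the", "bucket", "yesterday"]
# Segment spans with a label: "kicked the bucket" is one verbal idiom.
segments = [(0, 1, "PRON"), (1, 4, "V.IDIOM"), (4, 5, "ADV")]

tags = ["O"] * len(tokens)
for start, end, label in segments:
    tags[start] = f"B-{label}"
    for i in range(start + 1, end):
        tags[i] = f"I-{label}"

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```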
no code implementations • CONLL 2019 • Stephan Oepen, Omri Abend, Jan Hajic, Daniel Hershcovich, Marco Kuhlmann, Tim O'Gorman, Nianwen Xue, Jayeol Chun, Milan Straka, Zdenka Uresova
The 2019 Shared Task at the Conference on Computational Natural Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks.
no code implementations • CONLL 2019 • Daniel Hershcovich, Ofir Arviv
This paper describes the TUPA system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).
Ranked #2 on UCCA Parsing on CoNLL 2019
1 code implementation • IJCNLP 2019 • Rahul Aralikatte, Heather Lent, Ana Valeria Gonzalez, Daniel Hershcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, Anders Søgaard
Unresolved coreference is a bottleneck for relation extraction, and high-quality coreference resolvers may produce output that makes it considerably easier to extract knowledge triples.
no code implementations • ACL 2019 • Yonatan Bilu, Ariel Gera, Daniel Hershcovich, Benjamin Sznajder, Dan Lahav, Guy Moshkowich, Anael Malet, Assaf Gavron, Noam Slonim
In this work we aim to explicitly define a taxonomy of such principled recurring arguments, and, given a controversial topic, to automatically identify which of these arguments are relevant to the topic.
2 code implementations • NAACL 2019 • Daniel Hershcovich, Omri Abend, Ari Rappoport
Syntactic analysis plays an important role in semantic parsing, but the nature of this role remains a topic of ongoing debate.
1 code implementation • ACL 2019 • Leshem Choshen, Dan Eldad, Daniel Hershcovich, Elior Sulem, Omri Abend
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity.
1 code implementation • WS 2019 • Daniel Hershcovich, Assaf Toledo, Alon Halfon, Noam Slonim
Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly.
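Nearest neighbors in such models are usually retrieved by cosine similarity over the embedding vectors; the toy vectors below are invented, but the recipe is the standard one.

```python
# Cosine-similarity nearest neighbours over toy word vectors (the vectors are
# made up; with a real model they would come from e.g. word2vec or GloVe).
import numpy as np

embeddings = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.3]),
    "truck": np.array([0.0, 0.8, 0.4]),
}

def nearest_neighbours(word, k=2):
    query = embeddings[word]
    scores = {
        other: float(vec @ query / (np.linalg.norm(vec) * np.linalg.norm(query)))
        for other, vec in embeddings.items() if other != word
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

print(nearest_neighbours("cat"))  # e.g. [('dog', 0.97...), ('truck', ...)]
```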
1 code implementation • 15 Mar 2019 • Daniel Hershcovich, Omri Abend, Ari Rappoport
Syntactic analysis plays an important role in semantic parsing, but the nature of this role remains a topic of ongoing debate.
no code implementations • SEMEVAL 2019 • Daniel Hershcovich, Zohar Aizenbud, Leshem Choshen, Elior Sulem, Ari Rappoport, Omri Abend
We present the SemEval 2019 shared task on UCCA parsing in English, German and French, and discuss the participating systems and results.
1 code implementation • CONLL 2018 • Daniel Hershcovich, Omri Abend, Ari Rappoport
This paper presents our experiments with applying TUPA to the CoNLL 2018 UD shared task.
no code implementations • 31 May 2018 • Daniel Hershcovich, Leshem Choshen, Elior Sulem, Zohar Aizenbud, Ari Rappoport, Omri Abend
Given the success of recent semantic parsing shared tasks (on SDP and AMR), we expect the task to have a significant contribution to the advancement of UCCA parsing in particular, and semantic parsing in general.
1 code implementation • ACL 2018 • Daniel Hershcovich, Omri Abend, Ari Rappoport
The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others.
Ranked #3 on UCCA Parsing on SemEval 2019 Task 1
1 code implementation • ACL 2017 • Daniel Hershcovich, Omri Abend, Ari Rappoport
We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation.
Ranked #4 on UCCA Parsing on SemEval 2019 Task 1
no code implementations • COLING 2014 • Noam Slonim, Ehud Aharoni, Carlos Alzate, Roy Bar-Haim, Yonatan Bilu, Lena Dankin, Iris Eiron, Daniel Hershcovich, Shay Hummel, Mitesh Khapra, Tamar Lavee, Ran Levy, Paul Matchen, Anatoly Polnarov, Vikas Raykar, Ruty Rinott, Amrita Saha, Naama Zwerdling, David Konopnicki, Dan Gutfreund