1 code implementation • 2 Jul 2021 • Luisa März, Stefan Schweter, Nina Poerner, Benjamin Roth, Hinrich Schütze
We propose new methods for in-domain and cross-domain Named Entity Recognition (NER) on historical data for Dutch and French.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Nina Poerner, Ulli Waltinger, Hinrich Schütze
Domain adaptation of Pretrained Language Models (PTLMs) is typically achieved by unsupervised pretraining on target-domain text.
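For context, a minimal sketch of that typical approach: continued masked-language-model pretraining on target-domain text with Hugging Face Transformers. The model name, corpus file and hyperparameters below are illustrative assumptions, not the setup used in the paper (which proposes a cheaper alternative).

```python
# Hedged sketch of continued MLM pretraining on a target-domain corpus.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical target-domain corpus: one raw sentence per line.
lines = [l.strip() for l in open("target_domain.txt") if l.strip()]
dataset = [tok(l, truncation=True, max_length=128) for l in lines]

collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain_adapted",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```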
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Nina Poerner, Ulli Waltinger, Hinrich Schütze
We present a novel way of injecting factual knowledge about entities into the pretrained BERT model (Devlin et al., 2019): We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's native wordpiece vector space and use the aligned entity vectors as if they were wordpiece vectors.
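A minimal sketch of the alignment idea described above, assuming a simple least-squares linear map fitted on vocabulary shared between the two spaces; the dimensions, array names and random data are placeholders, not the paper's actual procedure.

```python
# Hedged sketch: map entity vectors into BERT's wordpiece embedding space
# with a linear transformation fitted on shared vocabulary items.
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_shared, n_entities = 100, 768, 5000, 10

# X: source-space vectors of words shared with BERT's wordpiece vocabulary.
# Y: the corresponding BERT wordpiece embedding rows.
X = rng.normal(size=(n_shared, d_src))
Y = rng.normal(size=(n_shared, d_tgt))

# Least-squares linear alignment: find W minimising ||X W - Y||^2.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Map entity vectors into the wordpiece space so they can be consumed
# as if they were ordinary wordpiece vectors.
entity_vecs = rng.normal(size=(n_entities, d_src))
aligned_entity_vecs = entity_vecs @ W   # shape: (n_entities, 768)
```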
no code implementations • ACL 2020 • Nina Poerner, Ulli Waltinger, Hinrich Schütze
We address the task of unsupervised Semantic Textual Similarity (STS) by ensembling diverse pre-trained sentence encoders into sentence meta-embeddings.
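A minimal sketch of one simple meta-embedding strategy (concatenating length-normalised encoder outputs and scoring with cosine similarity); the placeholder encoders below stand in for whatever pre-trained sentence encoders are being ensembled and are not the paper's actual ensemble.

```python
# Hedged sketch of concatenation meta-embeddings for unsupervised STS.
import numpy as np

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def meta_embed(sentence, encoders):
    # Concatenate the normalised outputs of several sentence encoders.
    return np.concatenate([normalize(enc(sentence)) for enc in encoders])

def sts_score(s1, s2, encoders):
    e1, e2 = meta_embed(s1, encoders), meta_embed(s2, encoders)
    return float(np.dot(normalize(e1), normalize(e2)))  # cosine similarity

# Hypothetical encoders: any callables mapping a sentence to a fixed vector.
encoders = [lambda s: np.ones(384), lambda s: np.arange(768, dtype=float)]
print(sts_score("a cat sat", "a cat was sitting", encoders))
```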
no code implementations • IJCNLP 2019 • Nina Poerner, Hinrich Schütze
We address the problem of Duplicate Question Detection (DQD) in low-resource domain-specific Community Question Answering forums.
no code implementations • ACL 2019 • Alona Sydorova, Nina Poerner, Benjamin Roth
Our results suggest that IP provides better explanations than LIME or attention, according to both automatic and human evaluation.
no code implementations • 31 Oct 2018 • Nina Poerner, Masoud Jalili Sabet, Benjamin Roth, Hinrich Schütze
Count-based word alignment methods, such as the IBM models or fast-align, struggle on very small parallel corpora.
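For contrast, a minimal sketch of a count-free alternative: aligning each source word to its nearest target word by cosine similarity in an assumed cross-lingual embedding space. This illustrates the general idea only and is not the method proposed in the paper.

```python
# Hedged sketch: similarity-based word alignment that needs no corpus-level
# counts, so it does not degrade on very small parallel corpora.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def align(src_tokens, tgt_tokens, emb):
    links = []
    for i, s in enumerate(src_tokens):
        sims = [cosine(emb(s), emb(t)) for t in tgt_tokens]
        links.append((i, int(np.argmax(sims))))  # best target per source word
    return links

# Toy usage with a hypothetical shared embedding lookup:
emb = lambda w: np.array([len(w), w.count("e"), 1.0])
print(align(["le", "chat"], ["the", "cat"], emb))
```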
2 code implementations • WS 2018 • Nina Poerner, Benjamin Roth, Hinrich Schütze
Input optimization methods, such as Google Deep Dream, create interpretable representations of neurons for computer vision DNNs.
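A minimal sketch of the general input-optimization idea: gradient ascent on a continuous input to maximise a chosen neuron's activation, in the spirit of Deep Dream. The tiny model and neuron index are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of input optimization: find an input that strongly
# activates one neuron of a small network.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 8))
neuron = 3                                   # which output neuron to "visualise"

x = torch.zeros(1, 16, requires_grad=True)   # the input we optimise
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    activation = model(x)[0, neuron]
    (-activation).backward()                 # gradient ascent on the activation
    opt.step()

print(x.detach())  # an input that strongly activates the chosen neuron
```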
1 code implementation • ACL 2018 • Nina Poerner, Hinrich Schütze, Benjamin Roth
The behavior of deep neural networks (DNNs) is hard to understand.
no code implementations • 5 Mar 2018 • Benjamin Roth, Costanza Conforti, Nina Poerner, Sanjeev Karn, Hinrich Schütze
In this work, we introduce the task of Open-Type Relation Argument Extraction (ORAE): given a corpus, a query entity Q and a knowledge base relation (e.g., "Q authored notable work with title X"), the model has to extract from the corpus an argument of a non-standard entity type (an entity that cannot be extracted by a standard named entity tagger, e.g., X: the title of a book or a work of art).
1 code implementation • 19 Jan 2018 • Nina Poerner, Benjamin Roth, Hinrich Schütze
The behavior of deep neural networks (DNNs) is hard to understand.