Search Results for author: Frank Mtumbuka

Found 4 papers, 2 papers with code

Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction

no code implementations · EMNLP 2020 · Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz

The goal of open information extraction (OIE) is to extract facts from natural language text, and to represent them as structured triples of the form <subject, predicate, object>.

Tasks: Open Information Extraction · Sentence
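To make the triple format above concrete, here is a minimal sketch of how OIE output is commonly represented; the `Triple` type, example sentence, and extractions are illustrative assumptions, not output of the paper's system.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """One open IE fact in the form <subject, predicate, object>."""
    subject: str
    predicate: str
    obj: str  # abbreviated to avoid shadowing the builtin name "object"

# Hypothetical extractions an OIE system might produce for one sentence;
# note that a single sentence can yield several overlapping triples.
sentence = "Marie Curie won the Nobel Prize in 1911."
extractions = [
    Triple("Marie Curie", "won", "the Nobel Prize"),
    Triple("Marie Curie", "won the Nobel Prize in", "1911"),
]

for t in extractions:
    print(f"<{t.subject}, {t.predicate}, {t.obj}>")
```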

Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction

no code implementations · 18 Dec 2023 · Frank Mtumbuka, Steven Schockaert

Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).

Tasks: Entity Embeddings · Language Modelling · +6
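The abstract frames relation extraction as text classification with a fine-tuned LM. Below is a minimal sketch of that framing (not the paper's code): the entity-marker scheme, the label set, and the choice of bert-base-uncased are assumptions for illustration.

```python
# Sketch: relation extraction as sequence classification with a pre-trained LM.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

labels = ["no_relation", "founded_by", "born_in"]  # hypothetical label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# Mark the candidate entity pair in the input; the classification head is
# untrained here and would normally be fine-tuned on labelled relation data.
text = "[E1] Apple [/E1] was founded by [E2] Steve Jobs [/E2]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(labels[logits.argmax(dim=-1).item()])
```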

EnCore: Fine-Grained Entity Typing by Pre-Training Entity Encoders on Coreference Chains

1 code implementation · 22 May 2023 · Frank Mtumbuka, Steven Schockaert

In this paper, we propose to improve on this process by pre-training an entity encoder such that embeddings of coreferring entities are more similar to each other than to the embeddings of other entities.

Tasks: Entity Embeddings · Entity Typing
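The objective described above, making embeddings of coreferring entities more similar to each other than to those of other entities, is a contrastive one. The following is a hedged sketch of such a loss; the InfoNCE-style formulation, temperature, and random stand-in embeddings are assumptions, not the EnCore implementation.

```python
import torch
import torch.nn.functional as F

def coref_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss over L2-normalised mention embeddings.

    anchor, positive: (dim,) embeddings of two coreferring mentions.
    negatives: (n, dim) embeddings of mentions of other entities.
    """
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(
        torch.cat([positive.unsqueeze(0), negatives]), dim=-1
    )
    logits = candidates @ anchor / temperature  # (n + 1,) similarities
    target = torch.tensor(0)  # the coreferring mention sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

# Toy usage with random vectors standing in for entity-encoder outputs.
dim = 128
loss = coref_contrastive_loss(
    torch.randn(dim), torch.randn(dim), torch.randn(8, dim)
)
print(loss.item())
```

Minimising this loss pulls the anchor towards its coreferring mention and away from the sampled negatives, which is the stated pre-training goal.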

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence

1 code implementation · Findings (NAACL) 2022 · Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz

To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis.

Tasks: Language Modelling · Negation
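One plausible reading of a meaning-matching objective is binary classification over (meaning, text) pairs. The sketch below illustrates that reading only; the pairing scheme, example data, and model choice are assumptions, not the paper's exact setup.

```python
# Sketch: score whether a meaning (e.g. a definition) matches a piece of text.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 1 = meaning matches text, 0 = it does not
)

meaning = "to move through water by moving parts of the body"
matched_text = "She swims every morning before work."
mismatched_text = "He parked the car in the garage."

for text, label in [(matched_text, 1), (mismatched_text, 0)]:
    inputs = tokenizer(meaning, text, return_tensors="pt")  # sentence-pair input
    logits = model(**inputs).logits
    # During intermediate training, cross-entropy against `label` would be
    # backpropagated; here we only show how the pair is encoded and scored.
    print(label, logits.softmax(dim=-1).tolist())
```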
