Search Results for author: Samuel Broscheit

Found 14 papers, 7 papers with code

You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings

2 code implementations ICLR 2020 Daniel Ruffinelli, Samuel Broscheit, Rainer Gemulla

A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance.

Hyperparameter Optimization Knowledge Graph Embedding +2

LibKGE - A knowledge graph embedding library for reproducible research

1 code implementation EMNLP 2020 Samuel Broscheit, Daniel Ruffinelli, Adrian Kochsiek, Patrick Betz, Rainer Gemulla

LibKGE (https://github.com/uma-pi1/kge) is an open-source PyTorch-based library for training, hyperparameter optimization, and evaluation of knowledge graph embedding models for link prediction.

Hyperparameter Optimization Knowledge Graph Embedding +1

The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus

2 code implementations 18 Dec 2021 Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel

To address the increasing demands of real-world applications, research on knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality, and noise.

Common Sense Reasoning Retrieval

Improving Wikipedia Verifiability with AI

1 code implementation 8 Jul 2022 Fabio Petroni, Samuel Broscheit, Aleksandra Piktus, Patrick Lewis, Gautier Izacard, Lucas Hosseini, Jane Dwivedi-Yu, Maria Lomeli, Timo Schick, Pierre-Emmanuel Mazaré, Armand Joulin, Edouard Grave, Sebastian Riedel

Hence, maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort.

Citation Recommendation Fact Checking

Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking

1 code implementation CoNLL 2019 Samuel Broscheit

We show on an entity linking benchmark that (i) this model improves the entity representations over plain BERT, (ii) it outperforms entity linking architectures that optimize the tasks separately, and (iii) it comes second only to the current state of the art, which performs mention detection and entity disambiguation jointly.

Ranked #11 on Entity Linking on AIDA-CoNLL (using extra training data)

Entity Disambiguation Entity Linking +5

Can We Predict New Facts with Open Knowledge Graph Embeddings? A Benchmark for Open Link Prediction

1 code implementation ACL 2020 Samuel Broscheit, Kiril Gashteovski, Yanjie Wang, Rainer Gemulla

An evaluation in such a setup raises the question of whether a correct prediction is actually a new fact induced by reasoning over the open knowledge graph, or whether it can be trivially explained.

Knowledge Graph Embeddings Link Prediction +3

On Evaluating Embedding Models for Knowledge Base Completion

no code implementations WS 2019 Yanjie Wang, Daniel Ruffinelli, Rainer Gemulla, Samuel Broscheit, Christian Meilicke

In this paper, we explore whether recent models work well for knowledge base completion and argue that the current evaluation protocols are better suited to question answering than to knowledge base completion.

Knowledge Base Completion Question Answering

Learning Distributional Token Representations from Visual Features

no code implementations WS 2018 Samuel Broscheit

In summary, we show that it is possible to obtain a text representation from pixels alone.

Machine Translation NMT +5

A Relational Tucker Decomposition for Multi-Relational Link Prediction

no code implementations3 Feb 2019 Yanjie Wang, Samuel Broscheit, Rainer Gemulla

We propose the Relational Tucker3 (RT) decomposition for multi-relational link prediction in knowledge graphs.
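For intuition, a classical Tucker3 decomposition factorizes the knowledge graph's subject-relation-object tensor into an entity factor matrix, a relation factor matrix, and a small core tensor that mixes them. The sketch below shows this generic Tucker3 scoring scheme; the paper's RT model is a relational variant, and the exact parameterization here is illustrative, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations = 6, 3
de, dr = 4, 2  # entity / relation factor ranks (illustrative)

E = rng.normal(size=(num_entities, de))   # entity factors (shared by subjects and objects)
W = rng.normal(size=(num_relations, dr))  # relation factors
G = rng.normal(size=(de, dr, de))         # core tensor mixing the factors

def tucker3_score(s, r, o):
    """Score of triple (s, r, o): the core tensor contracted with e_s, w_r, e_o."""
    return float(np.einsum('i,ijk,j,k->', E[s], G, W[r], E[o]))

def relation_matrix(r):
    """Each relation induces a dense de x de mixing matrix from the shared core."""
    return np.einsum('ijk,j->ik', G, W[r])
```

Because all relations share the core tensor G, parameters are shared across relations, in contrast to models such as RESCAL that learn an independent dense matrix per relation.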

Knowledge Graph Embedding Knowledge Graphs +1
