Cross-Lingual Entity Linking
9 papers with code • 0 benchmarks • 1 dataset
Cross-lingual entity linking is the task of leveraging data and models from a language with ample resources (e.g., English) to perform entity linking (i.e., assigning a unique identity to entities mentioned in a text) in another, typically low-resource, language.
Image Source: Towards Zero-resource Cross-lingual Entity Linking
This enables our approach to: (a) augment the limited supervision in the target language with additional supervision from a high-resource language (like English), and (b) train a single entity linking model for multiple languages, improving upon individually trained models for each language.
To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language.
Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts.
Cross-lingual Entity Linking (XEL), the problem of grounding mentions of entities in foreign-language text into an English knowledge base such as Wikipedia, has received considerable research attention in recent years, producing a range of promising techniques.
However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.
Citation information in scholarly data is an important source of insight into the reception of publications and the scholarly discourse.
We also propose a lightweight, simple solution based on the construction of indexes whose design is motivated by more complex transfer-learning-based neural approaches.
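An index-based candidate generation step of this kind can be sketched as follows. This is a hedged illustration, not the paper's actual method: it builds an inverted index from character trigrams of English KB titles and retrieves candidates for a (possibly transliterated) mention by trigram overlap; the titles and mention are made up.

```python
from collections import defaultdict

def ngrams(s: str, n: int = 3) -> set:
    # Character n-grams with boundary markers, case-folded.
    s = f"#{s.lower()}#"
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def build_index(titles):
    # Inverted index: trigram -> set of KB titles containing it.
    index = defaultdict(set)
    for title in titles:
        for g in ngrams(title):
            index[g].add(title)
    return index

def candidates(mention, index, k=2):
    # Score titles by the number of shared trigrams with the mention.
    scores = defaultdict(int)
    for g in ngrams(mention):
        for title in index.get(g, ()):
            scores[title] += 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

index = build_index(["Barcelona", "Berlin", "Bern"])
print(candidates("Barselona", index))  # → ['Barcelona']
```

Such an index needs no bilingual training data beyond a transliteration of the mention, which is why it can serve as a lightweight stand-in for heavier neural transfer-learning models.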