Cross-Lingual Natural Language Inference

4 papers with code · Natural Language Processing

Greatest papers with code

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

11 Oct 2018 · google-research/bert

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.

Common Sense Reasoning · Cross-Lingual Natural Language Inference · Named Entity Recognition · Question Answering
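The "jointly conditioning on both left and right context" that distinguishes BERT from left-to-right language models comes down to the attention mask. A minimal NumPy sketch (toy dimensions, random vectors; not BERT's actual implementation) contrasting a causal mask with BERT-style bidirectional attention:

```python
import numpy as np

def attention(q, k, v, mask):
    """Scaled dot-product attention; masked-out positions get no weight."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)  # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

seq_len, dim = 4, 8
rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(seq_len, dim))

# Left-to-right (GPT-style): token i may attend only to positions <= i.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
# Bidirectional (BERT-style): every token attends to all positions.
bidirectional_mask = np.ones((seq_len, seq_len), dtype=bool)

left_to_right = attention(q, k, v, causal_mask)
bidirectional = attention(q, k, v, bidirectional_mask)
```

Under the causal mask, the first token can only attend to itself, so its output is just its own value vector; under the bidirectional mask it mixes in information from the whole sequence, which is what makes BERT's representations deep in both directions.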

Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond

26 Dec 2018 · facebookresearch/LASER

We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different language families and written in 28 different scripts. We also introduce a new test set of aligned sentences in 122 languages based on the Tatoeba corpus, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages.

Cross-Lingual Bitext Mining · Cross-Lingual Document Classification · Cross-Lingual Natural Language Inference · Cross-Lingual Transfer · Document Classification · Joint Multilingual Sentence Representations · Parallel Corpus Mining
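The "multilingual similarity search" evaluation works because LASER maps sentences from all 93 languages into one shared vector space, so a translation pair lands close together. A toy sketch with hand-made 3-d stand-ins for the real 1024-dimensional LASER embeddings (the vectors and sentences here are illustrative, not real model output):

```python
import numpy as np

def cosine_search(query, corpus):
    """Index of the corpus embedding closest to `query` by cosine similarity."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

# Toy "English" corpus embeddings in the shared space.
corpus = np.array([[0.9, 0.1, 0.0],   # "The cat sleeps."
                   [0.0, 0.8, 0.2],   # "It is raining."
                   [0.1, 0.1, 0.9]])  # "I like tea."
# A "French" query sentence embedded into the same space ("Le chat dort.").
query = np.array([0.85, 0.15, 0.05])
cosine_search(query, corpus)  # → 0, the English translation
```

This nearest-neighbour retrieval over a joint space is also the basic operation behind the bitext-mining and Tatoeba results the abstract mentions.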

Supervised Learning of Universal Sentence Representations from Natural Language Inference Data

EMNLP 2017 · facebookresearch/InferSent

Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have been less successful.

Cross-Lingual Natural Language Inference · Semantic Textual Similarity · Transfer Learning · Word Embeddings
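InferSent trains its sentence encoder with NLI supervision: the paper combines the premise embedding u and hypothesis embedding v into the feature vector [u; v; |u − v|; u ⊙ v] before the entailment classifier (this composition is from the paper, not the snippet above). A minimal sketch of that feature composition:

```python
import numpy as np

def nli_features(u, v):
    """InferSent-style pair features: concatenation, absolute
    difference, and element-wise product of the two embeddings."""
    return np.concatenate([u, v, np.abs(u - v), u * v])

u = np.array([1.0, 2.0])  # toy premise embedding
v = np.array([3.0, 1.0])  # toy hypothesis embedding
feats = nli_features(u, v)  # 4 * dim features for the classifier
```

The difference and product terms give the classifier direct access to how the two sentences agree or diverge dimension by dimension, which is what makes the learned encoder transfer well to similarity tasks.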

XNLI: Evaluating Cross-lingual Sentence Representations

EMNLP 2018 · facebookresearch/XLM

State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language.

Cross-Lingual Natural Language Inference · Machine Translation