Cross-Lingual Question Answering

11 papers with code • 3 benchmarks • 6 datasets


Most implemented papers

On the Cross-lingual Transferability of Monolingual Representations

deepmind/xquad ACL 2020

This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages, giving rise to deep multilingual abstractions.
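The shared-vocabulary mechanism the excerpt refers to can be sketched in a few lines (a toy illustration, not the paper's code): one subword vocabulary serves all languages, so overlapping subwords get the same IDs and therefore the same embedding parameters.

```python
# Toy shared subword vocabulary: overlapping pieces ("tax", "##i")
# map to the same IDs in every language, so their embedding
# parameters are literally shared across languages.
shared_vocab = {"tax": 0, "##i": 1, "el": 2, "the": 3}

def encode(subwords):
    """Map a list of subword strings to shared vocabulary IDs."""
    return [shared_vocab[s] for s in subwords]

en = encode(["the", "tax", "##i"])   # English: "the taxi"
es = encode(["el", "tax", "##i"])    # Spanish: "el taxi"
print(en, es)  # the "tax", "##i" pieces receive identical IDs in both
```

The paper itself questions whether this sharing is actually necessary for cross-lingual transfer; the sketch only shows the mechanism the quoted sentence describes.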

Scaling Instruction-Finetuned Language Models

google-research/flan 20 Oct 2022

We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation).

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).

ByT5: Towards a token-free future with pre-trained byte-to-byte models

google-research/byt5 28 May 2021

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units.
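The token-free alternative motivating ByT5 can be sketched directly (a minimal illustration, not the paper's implementation): the model consumes raw UTF-8 bytes, so the "vocabulary" is just the 256 byte values and no language is ever out of vocabulary.

```python
# Minimal sketch of byte-level tokenization: instead of a learned
# subword vocabulary, the input sequence is the text's UTF-8 bytes
# (IDs 0-255), which covers every language with no OOV tokens.
def byte_tokenize(text: str) -> list[int]:
    """Encode text as a sequence of UTF-8 byte IDs."""
    return list(text.encode("utf-8"))

print(byte_tokenize("café"))  # 'é' spans two bytes: [99, 97, 102, 195, 169]
```

The trade-off is sequence length: byte sequences are longer than subword sequences, which is part of what the paper studies.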

Rethinking embedding coupling in pre-trained language models

PaddlePaddle/PaddleNLP ICLR 2021

We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models.
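The weight-sharing practice under re-evaluation can be sketched as follows (a hedged toy example, not the paper's code): with tied embeddings, the output projection reuses the input embedding matrix, so logits are dot products against the same vectors used to embed tokens.

```python
# Toy sketch of embedding coupling: the output logits reuse the input
# embedding matrix E (transposed), so no separate output matrix exists.
# Decoupling would instead allocate an independent output projection.
vocab = ["qui", "who", "est", "is"]
# Input embedding matrix E: one row per vocabulary entry (toy 2-d).
E = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]

def logits_tied(hidden):
    """Output logits = hidden · E[v] for each vocab entry v."""
    return [sum(h * e for h, e in zip(hidden, row)) for row in E]

h = [0.9, 0.1]  # hidden state near the embedding of "qui"
scores = logits_tied(h)
best = vocab[scores.index(max(scores))]
print(best)  # the token whose input embedding is closest to h wins
```

With tying, improving an input embedding also moves the output distribution, which is exactly the coupling the paper re-examines.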

mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models

studio-ousia/luke ACL 2022

We train a multilingual language model covering 24 languages with entity representations and show that it consistently outperforms word-based pretrained models on various cross-lingual transfer tasks.

Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering

microsoft/unilm EMNLP 2021

Coupled with the availability of large-scale datasets, deep learning architectures have enabled rapid progress on the Question Answering task.

Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension

luciusssss/xkbqa-as-mrc 26 Feb 2023

We convert KB subgraphs into passages to narrow the gap between KB schemas and questions, which enables our model to benefit from recent advances in multilingual pre-trained language models (MPLMs) and cross-lingual machine reading comprehension (xMRC).
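The subgraph-to-passage conversion can be sketched as follows (hypothetical triples and a naive verbalizer for illustration; the paper's actual conversion is more careful): KB triples are rendered as text so a multilingual reading-comprehension model can treat the KB like any other passage.

```python
# Toy sketch: verbalize (subject, relation, object) KB triples into a
# passage, so an xMRC model can extract answer spans from it directly.
def subgraph_to_passage(triples):
    """Render KB triples as simple natural-language sentences."""
    return " ".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

triples = [
    ("Marie Curie", "born_in", "Warsaw"),       # hypothetical subgraph
    ("Marie Curie", "field", "physics"),
]
passage = subgraph_to_passage(triples)
print(passage)
# A cross-lingual MRC model then answers questions in any supported
# language by extracting a span (e.g. "Warsaw") from this passage.
```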

PAXQA: Generating Cross-lingual Question Answering Examples at Training Scale

manestay/paxqa 24 Apr 2023

This work proposes a synthetic data generation method for cross-lingual QA which leverages indirect supervision from existing parallel corpora.
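The indirect supervision from parallel corpora can be sketched with a toy annotation-projection step (hand-written alignments for illustration; a real pipeline would produce them with a word aligner): a source-language answer span is mapped through word alignments to yield a target-language QA example.

```python
# Hedged sketch of annotation projection over a parallel sentence pair:
# project a source answer span to the target side via word alignments.
def project_span(span, alignment):
    """Map source token indices to aligned target token indices."""
    return sorted({alignment[i] for i in span if i in alignment})

src = ["The", "capital", "is", "Paris"]
tgt = ["La", "capitale", "est", "Paris"]
alignment = {0: 0, 1: 1, 2: 2, 3: 3}  # source index -> target index
answer_span = [3]                     # "Paris" in the source sentence
projected = project_span(answer_span, alignment)
print([tgt[i] for i in projected])    # projected target-side answer
```

Pairing the projected answer with a (translated or generated) question yields synthetic cross-lingual QA training data at corpus scale.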

Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation

ccasimiro88/self-distillation-gxlt-qa 29 Sep 2023

Our approach seeks to enhance cross-lingual QA transfer using a high-performing multilingual model trained on a large-scale dataset, complemented by a few thousand aligned QA examples across languages.
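The distillation objective behind this kind of approach can be sketched generically (pure-Python softmax/KL for illustration; the paper's exact self-distillation loss may differ): the student is trained to match the teacher's temperature-softened output distribution.

```python
# Minimal sketch of a knowledge-distillation loss: KL divergence
# between teacher and student softmax distributions at temperature T.
import math

def softmax(logits, temp=1.0):
    exps = [math.exp(x / temp) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [2.0, 1.0, 0.1]  # hypothetical model outputs
student_logits = [1.5, 1.2, 0.2]
T = 2.0  # temperature softens both distributions
loss = kl_div(softmax(teacher_logits, T), softmax(student_logits, T))
print(loss)  # non-negative; zero only if the distributions match
```

In self-distillation the teacher and student share an architecture (or are the same model across languages), but the loss itself has this standard shape.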