Search Results for author: Neha Verma

Found 9 papers, 4 papers with code

FeTaQA: Free-form Table Question Answering

1 code implementation • 1 Apr 2021 • Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir Radev

Existing table question answering datasets contain abundant factual questions that primarily evaluate the query and schema comprehension capability of a system, but they fail to include questions that require complex reasoning and integration of information due to the constraint of the associated short-form answers.

Question Answering • Retrieval • +2
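To make the short-form vs. free-form distinction in the abstract concrete, here is a minimal, hypothetical sketch of what a free-form table QA instance looks like; the field names are illustrative and are not FeTaQA's exact schema.

```python
# Illustrative shape of a free-form table question answering example.
# Field names here are hypothetical, not FeTaQA's exact schema.
example = {
    "table": {
        "header": ["Year", "Title", "Role"],
        "rows": [
            ["2015", "Film A", "Lead"],
            ["2018", "Film B", "Supporting"],
        ],
    },
    "question": "How did the actor's roles change between 2015 and 2018?",
    # A free-form answer is a full sentence that integrates several cells,
    # rather than a single short-form cell value.
    "answer": "The actor moved from a lead role in Film A (2015) to a "
              "supporting role in Film B (2018).",
}
print(example["answer"])
```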

IsoVec: Controlling the Relative Isomorphism of Word Embedding Spaces

1 code implementation • 11 Oct 2022 • Kelly Marchisio, Neha Verma, Kevin Duh

The ability to extract high-quality translation dictionaries from monolingual word embedding spaces depends critically on the geometric similarity of the spaces -- their degree of "isomorphism."

Bilingual Lexicon Induction • Translation
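As a rough illustration of what geometric similarity between two embedding spaces means here, the sketch below computes an orthogonal-Procrustes residual over a toy seed dictionary with NumPy/SciPy. This is a generic isomorphism proxy for illustration only, not the IsoVec training objective itself.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Toy embeddings for 100 word pairs assumed to be translations of each
# other (a "seed dictionary"); rows of X and Y are aligned.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                  # source-language embeddings
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # a random rotation
Y = X @ Q + 0.01 * rng.normal(size=(100, 50))   # target-language embeddings

# Best orthogonal map R rotating X onto Y (Procrustes analysis).
R, _ = orthogonal_procrustes(X, Y)

# Residual after the best rotation: near 0 means the spaces are related
# by a pure rotation, i.e. (near-)isomorphic in this geometric sense.
residual = np.linalg.norm(X @ R - Y) / np.linalg.norm(Y)
print(f"Procrustes residual (lower = more isomorphic): {residual:.3f}")
```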

Exploring Representational Disparities Between Multilingual and Bilingual Translation Models

no code implementations • 23 May 2023 • Neha Verma, Kenton Murray, Kevin Duh

Multilingual machine translation has proven immensely useful for both parameter efficiency and overall performance across many language pairs via complete multilingual parameter sharing.

Machine Translation • Translation
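One common way complete multilingual parameter sharing is realized in practice is a single model that routes by a target-language tag prepended to the source sentence. The sketch below illustrates that general convention only; the shared_model.translate() call it hints at is hypothetical and is not the paper's experimental setup.

```python
# One shared model can serve many language pairs when the source side
# carries a target-language tag (a common multilingual NMT convention).
def tag_source(sentence: str, target_lang: str) -> str:
    """Prepend a target-language token so a single shared model can route."""
    return f"<2{target_lang}> {sentence}"

requests = [("de", "The weather is nice."),
            ("fr", "The weather is nice.")]

for tgt, sent in requests:
    tagged = tag_source(sent, tgt)
    print(tagged)
    # output = shared_model.translate(tagged)  # hypothetical call: the same
    #                                          # parameters handle every pair
```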

Merging Text Transformer Models from Different Initializations

1 code implementation • 1 Mar 2024 • Neha Verma, Maha Elbayad

Recent work on one-shot permutation-based model merging has shown impressive low- or zero-barrier mode connectivity between models from completely different initializations.

Language Modelling • Masked Language Modeling
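A minimal sketch of the permutation-based merging idea for a single toy feed-forward layer: align model B's hidden units to model A's with the Hungarian method, then average the aligned weights. This is only an illustration on random NumPy weights, not the paper's full transformer procedure, which must also handle attention heads and residual connections.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
d_in, d_hidden = 16, 32

# Toy feed-forward weights from two independently trained models.
W_a = rng.normal(size=(d_hidden, d_in))   # model A: hidden x input
W_b = rng.normal(size=(d_hidden, d_in))   # model B: hidden x input

# Find the permutation of model B's hidden units that best matches
# model A's units (maximize total similarity via the Hungarian method).
similarity = W_a @ W_b.T                  # (d_hidden, d_hidden)
row_ind, col_ind = linear_sum_assignment(-similarity)
perm = col_ind                            # B's unit perm[i] aligns with A's unit i

# Permute B into A's "basis", then average the aligned weights.
# In a full network, the next layer's input weights must be permuted
# consistently so the function computed by model B is unchanged.
W_b_aligned = W_b[perm]
W_merged = 0.5 * (W_a + W_b_aligned)
print(W_merged.shape)  # (32, 16): a single merged layer
```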

Strategies for Adapting Multilingual Pre-training for Domain-Specific Machine Translation

no code implementations • AMTA 2022 • Neha Verma, Kenton Murray, Kevin Duh

Therefore, in this work, we propose two major fine-tuning strategies: our language-first approach first learns the language pair via general bitext and then the domain via in-domain bitext, while our domain-first approach first learns the domain via multilingual in-domain bitext and then the language pair via language-pair-specific in-domain bitext.

Domain Adaptation • Machine Translation • +1
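To make the ordering of the two strategies concrete, here is a minimal sketch that writes each as an explicit stage list; fine_tune(), the starting checkpoint name, and the data descriptions are hypothetical placeholders, not the paper's actual training code.

```python
# The two fine-tuning orders from the abstract as explicit stage lists.
def fine_tune(model, data, goal):
    # Placeholder: a real implementation would continue training `model`
    # on `data`; here we only log the schedule.
    print(f"fine-tuning on {data} ({goal})")
    return model

language_first = [
    ("general bitext for the language pair", "learn the language pair"),
    ("in-domain bitext for the language pair", "learn the domain"),
]

domain_first = [
    ("multilingual in-domain bitext", "learn the domain"),
    ("in-domain bitext for the language pair", "learn the language pair"),
]

model = "multilingual-pretrained-checkpoint"   # hypothetical starting point
for data, goal in language_first:              # swap in domain_first to compare
    model = fine_tune(model, data, goal)
```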
