Search Results for author: Garima Lalwani

Found 4 papers, 1 paper with code

Using Optimal Transport as Alignment Objective for fine-tuning Multilingual Contextualized Embeddings

no code implementations · Findings (EMNLP) 2021 · Sawsan Alqahtani, Garima Lalwani, Yi Zhang, Salvatore Romeo, Saab Mansour

Recent studies have proposed different methods to improve multilingual word representations in contextualized settings including techniques that align between source and target embedding spaces.

Cross-Lingual Transfer · Word Alignment
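The abstract above mentions aligning source and target embedding spaces with an optimal-transport objective. As a hedged illustration of the general idea (not the paper's actual method), the sketch below computes an entropic-OT transport plan between toy token embeddings with plain Sinkhorn iterations; the cost matrix, regularization value, and uniform marginals are illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=100):
    # Entropic OT via Sinkhorn iterations with uniform marginals
    # over source and target tokens (an illustrative choice).
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy "contextualized" embeddings for a source and a target sentence.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 8))  # 4 source tokens, dim 8
tgt = rng.normal(size=(5, 8))  # 5 target tokens, dim 8

# Cosine-distance cost between every source/target token pair.
src_n = src / np.linalg.norm(src, axis=1, keepdims=True)
tgt_n = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
cost = 1.0 - src_n @ tgt_n.T

plan = sinkhorn(cost)
ot_loss = float((plan * cost).sum())  # alignment objective to minimize
```

Minimizing `ot_loss` pulls the two embedding spaces together; in a fine-tuning setting the loss would be backpropagated through the encoder rather than computed on fixed vectors as here.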

Context Analysis for Pre-trained Masked Language Models

no code implementations · Findings of the Association for Computational Linguistics 2020 · Yi-An Lai, Garima Lalwani, Yi Zhang

Pre-trained language models that learn contextualized word representations from a large un-annotated corpus have become a standard component for many state-of-the-art NLP systems.

An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models

1 code implementation · 14 Jul 2020 · Lifu Tu, Garima Lalwani, Spandana Gella, He He

Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset.

Multi-Task Learning · Natural Language Inference · +1

CASA-NLU: Context-Aware Self-Attentive Natural Language Understanding for Task-Oriented Chatbots

no code implementations · IJCNLP 2019 · Arshit Gupta, Peng Zhang, Garima Lalwani, Mona Diab

In this work, we propose a context-aware self-attentive NLU (CASA-NLU) model that uses multiple signals, such as previous intents, slots, dialog acts and utterances over a variable context window, in addition to the current user utterance.

Dialogue Management · Intent Classification · +3
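The CASA-NLU abstract describes self-attention over multiple dialog signals (previous intents, slots, dialog acts, utterances) plus the current utterance. As a minimal sketch of that general pattern, not the authors' architecture, the code below concatenates hypothetical signal embeddings over a 3-turn context window and fuses them with a single unparameterized self-attention layer; all dimensions and signal names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Single-head scaled dot-product self-attention; learned
    # query/key/value projections are omitted for brevity.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    return softmax(scores) @ X

rng = np.random.default_rng(1)
# Hypothetical per-turn signal embeddings over a 3-turn context window.
prev_intents = rng.normal(size=(3, 16))
dialog_acts = rng.normal(size=(3, 16))
utterances = rng.normal(size=(3, 16))

# Stack all signals into one sequence and let attention mix them.
context = np.concatenate([prev_intents, dialog_acts, utterances], axis=0)
fused = self_attention(context)   # (9, 16) context-aware representations
intent_repr = fused.mean(axis=0)  # pooled vector for intent classification
```

A real model would feed `intent_repr` (or the representation of the current utterance) into intent and slot classification heads trained jointly; here mean pooling stands in for that step.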
