Search Results for author: Vinit Ravishankar

Found 22 papers, 3 papers with code

From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers

no code implementations EMNLP 2020 Anne Lauscher, Vinit Ravishankar, Ivan Vulić, Goran Glavaš

Massively multilingual transformers (MMTs) pretrained via language modeling (e.g., mBERT, XLM-R) have become a default paradigm for zero-shot language transfer in NLP, offering unmatched transfer performance.

Cross-Lingual Word Embeddings Dependency Parsing +5

Multilingual ELMo and the Effects of Corpus Sampling

no code implementations NoDaLiDa 2021 Vinit Ravishankar, Andrey Kutuzov, Lilja Øvrelid, Erik Velldal

Multilingual pretrained language models are rapidly gaining popularity in NLP systems for non-English languages.

Word Order Does Matter and Shuffled Language Models Know It

no code implementations ACL 2022 Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, Anders Søgaard

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information.

Position Segmentation +1

A Closer Look at Parameter Contributions When Training Neural Language and Translation Models

no code implementations COLING 2022 Raúl Vázquez, Hande Celikkanat, Vinit Ravishankar, Mathias Creutz, Jörg Tiedemann

We analyze the learning dynamics of neural language and translation models using Loss Change Allocation (LCA), an indicator that enables a fine-grained analysis of parameter updates when optimizing for the loss function.

Causal Language Modeling Language Modelling +3
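The Loss Change Allocation (LCA) indicator mentioned above attributes the change in loss across a training step to individual parameters via a first-order approximation. The following is a minimal PyTorch sketch of that idea only; the toy model, data, and the simple one-step SGD variant are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Minimal Loss Change Allocation (LCA) sketch: attribute the loss change of one
# optimisation step to individual parameter tensors via a first-order
# approximation, delta_loss ~= sum_i grad_i * delta_theta_i.
# (The tiny model and random data below are placeholders, not the paper's setup.)

torch.manual_seed(0)
model = nn.Linear(10, 1)                      # toy stand-in for a language/translation model
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Gradients and parameter values before the update.
loss_before = loss_fn(model(x), y)
opt.zero_grad()
loss_before.backward()
grads = {n: p.grad.detach().clone() for n, p in model.named_parameters()}
before = {n: p.detach().clone() for n, p in model.named_parameters()}

opt.step()                                    # one optimisation step

# Allocate the (approximate) loss change to each parameter tensor.
lca = {n: (grads[n] * (p.detach() - before[n])).sum().item()
       for n, p in model.named_parameters()}

loss_after = loss_fn(model(x), y)
print("true loss change  :", (loss_after - loss_before).item())
print("sum of allocations:", sum(lca.values()))
print("per-tensor LCA    :", lca)
```

Summing the per-parameter allocations over training steps gives the fine-grained view of which parts of the network "earn" the loss reduction, which is the kind of analysis the paper builds on.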

Word Order and World Knowledge

no code implementations 1 Mar 2024 Qinghua Zhao, Vinit Ravishankar, Nicolas Garneau, Anders Søgaard

Word order is an important concept in natural language, and in this work, we study how word order affects the induction of world knowledge from raw text using language models.

World Knowledge

Word Order Does Matter (And Shuffled Language Models Know It)

no code implementations 21 Mar 2022 Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, Anders Søgaard

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information.

Position Segmentation +1
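The "randomly permuted sentences" referenced in this and the ACL 2022 entry above amount to destroying word order while keeping the bag of words intact. A minimal sketch of that permutation step is shown below; whitespace tokenisation is an illustrative assumption, and the papers experiment with several permutation granularities (e.g., subwords and n-grams).

```python
import random

def shuffle_sentence(sentence, seed=None):
    """Randomly permute the tokens of a sentence.

    Word order is destroyed while the bag of words is preserved.
    Whitespace tokenisation is an assumption for illustration only.
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    rng.shuffle(tokens)
    return " ".join(tokens)

print(shuffle_sentence("the cat sat on the mat", seed=0))
```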

The Impact of Positional Encodings on Multilingual Compression

no code implementations EMNLP 2021 Vinit Ravishankar, Anders Søgaard

In order to preserve word-order information in a non-autoregressive setting, transformer architectures tend to inject positional knowledge, for instance by adding positional encodings to token embeddings.

Inductive Bias
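The mechanism described in the snippet above, adding positional encodings to token embeddings, can be illustrated with the standard sinusoidal formulation of Vaswani et al. (2017). The sketch below is generic; the sequence length, model dimension, and random "token embeddings" are placeholders, and the paper itself compares several positional encoding schemes.

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encodings (Vaswani et al., 2017)."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * (-math.log(10000.0) / d_model)
    )
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# Toy example: add positional information to (random placeholder) token embeddings.
seq_len, d_model = 16, 64
token_embeddings = torch.randn(seq_len, d_model)
inputs = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
print(inputs.shape)  # torch.Size([16, 64])
```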

Attention Can Reflect Syntactic Structure (If You Let It)

no code implementations EACL 2021 Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, Joakim Nivre

Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism.
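"Decoding linguistic structure from attention" usually means turning an attention matrix into a dependency-like graph. The sketch below uses a greedy per-token argmax over a random placeholder attention matrix purely for illustration; the EACL 2021 paper above decodes proper maximum spanning trees from the attention weights of real (and fine-tuned) Transformer heads.

```python
import numpy as np

# Minimal sketch: for each token, greedily pick the position it attends to most
# as its "head". Greedy argmax over a random matrix is a simplification of the
# maximum-spanning-tree decoding used in the paper.

tokens = ["The", "dog", "chased", "the", "cat"]
n = len(tokens)

rng = np.random.default_rng(0)
attention = rng.random((n, n))                      # placeholder attention weights
attention /= attention.sum(axis=1, keepdims=True)   # row-normalise, like softmax
np.fill_diagonal(attention, 0.0)                    # a token should not head itself

heads = attention.argmax(axis=1)                    # greedy head choice per token
for i, tok in enumerate(tokens):
    print(f"{tok:>7} <- {tokens[heads[i]]}")
```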

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations

2 code implementations ACL 2020 Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard

Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of common sense reasoning ability.

Common Sense Reasoning

From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers

no code implementations 1 May 2020 Anne Lauscher, Vinit Ravishankar, Ivan Vulić, Goran Glavaš

Massively multilingual transformers pretrained with language modeling objectives (e.g., mBERT, XLM-R) have become a de facto default transfer paradigm for zero-shot cross-lingual transfer in NLP, offering unmatched transfer performance.

Cross-Lingual Word Embeddings Dependency Parsing +6
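The transfer paradigm studied in this paper is: fine-tune a massively multilingual encoder on labelled source-language (typically English) data, then apply it unchanged to the target language with no target-language labels. The Hugging Face sketch below illustrates that recipe under assumptions of my own, with a two-example placeholder "dataset" and a generic binary classification task; it is not the paper's experimental setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Zero-shot cross-lingual transfer in a nutshell: fine-tune a multilingual
# encoder (here mBERT) on source-language labels only, then run it unchanged
# on the target language. The tiny "dataset" below is a placeholder.

model_name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

# 1) Fine-tune on English (source-language) examples only.
en_texts = ["great movie", "terrible movie"]
en_labels = torch.tensor([1, 0])
batch = tok(en_texts, return_tensors="pt", padding=True)
optim.zero_grad()
loss = model(**batch, labels=en_labels).loss
loss.backward()
optim.step()

# 2) Zero-shot inference on the target language, no target-language labels used.
model.eval()
de_batch = tok(["großartiger Film"], return_tensors="pt")
with torch.no_grad():
    pred = model(**de_batch).logits.argmax(dim=-1)
print(pred)
```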

Do Neural Language Models Show Preferences for Syntactic Formalisms?

no code implementations ACL 2020 Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre

Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces.

A Systematic Comparison of Architectures for Document-Level Sentiment Classification

1 code implementation 19 Feb 2020 Jeremy Barnes, Vinit Ravishankar, Lilja Øvrelid, Erik Velldal

Documents are composed of smaller pieces - paragraphs, sentences, and tokens - that have complex relationships between one another.

Classification Document Classification +5

Multilingual Probing of Deep Pre-Trained Contextual Encoders

no code implementations WS 2019 Vinit Ravishankar, Memduh Gökırmak, Lilja Øvrelid, Erik Velldal

Encoders that generate representations based on context have, in recent years, benefited from adaptations that allow for pre-training on large text corpora.

Sentence

Probing Multilingual Sentence Representations With X-Probe

no code implementations WS 2019 Vinit Ravishankar, Lilja Øvrelid, Erik Velldal

This paper extends the task of probing sentence representations for linguistic insight in a multilingual domain.

Natural Language Inference Sentence
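Probing, as used in this and the previous entry, means freezing an encoder, extracting sentence representations, and training a lightweight classifier to predict a linguistic property from them. The sketch below shows that pattern with random placeholder embeddings and a made-up binary property; X-Probe applies the same recipe to real multilingual encoder outputs and properties such as sentence length or tense.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Probing sketch: train a simple classifier on frozen "sentence embeddings"
# to predict a linguistic property. Embeddings and labels here are placeholders.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))        # stand-in for frozen encoder output
labels = (embeddings[:, 0] > 0).astype(int)      # stand-in linguistic property

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```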

Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination

no code implementations SEMEVAL 2018 Artur Kulmizev, Mostafa Abdou, Vinit Ravishankar, Malvina Nissim

We participated in the SemEval-2018 shared task on capturing discriminative attributes (Task 10) with a simple system that ranked 8th among the 26 participating teams.

A prototype dependency treebank for Breton

no code implementations JEPTALNRECITAL 2018 Francis M. Tyers, Vinit Ravishankar

This paper describes the development of the first syntactically-annotated corpus of Breton.
