Search Results for author: Lisa Bylinina

Found 4 papers, 3 papers with code

Too Much Information: Keeping Training Simple for BabyLMs

no code implementations • 3 Nov 2023 • Lukas Edman, Lisa Bylinina

This paper details the work of the University of Groningen for the BabyLM Challenge.

Language Modelling

Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations

1 code implementation • 4 Jun 2023 • Aleksey Tikhonov, Lisa Bylinina, Denis Paperno

Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models.

Visual Grounding • Word Embeddings

Connecting degree and polarity: An artificial language learning study

1 code implementation • 13 Sep 2021 • Lisa Bylinina, Alexey Tikhonov, Ekaterina Garmash

We investigate a new linguistic generalization in pre-trained language models (taking BERT (Devlin et al., 2019) as a case study).

Language Modelling • Sentence

Transformers in the loop: Polarity in neural models of language

1 code implementation • ACL 2022 • Lisa Bylinina, Alexey Tikhonov

Establishing this allows us to more adequately evaluate the performance of language models and also to use language models to discover new insights into natural language grammar beyond existing linguistic theories.
