Search Results for author: Gemma Boleda

Found 43 papers, 10 papers with code

Challenges in including extra-linguistic context in pre-trained language models

no code implementations Insights (ACL) 2022 Ionut Sorodoc, Laura Aina, Gemma Boleda

To successfully account for language, computational models need to take into account both the linguistic context (the content of the utterances) and the extra-linguistic context (for instance, the participants in a dialogue).

Language Modelling Transfer Learning

The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese

no code implementations16 Nov 2023 Yunke He, Xixian Liao, Jialing Liang, Gemma Boleda

Different speakers often produce different names for the same object or entity (e.g., "woman" vs. "tourist" for a female tourist).

Object

Communication breakdown: On the low mutual intelligibility between human and neural captioning

1 code implementation20 Oct 2022 Roberto Dessì, Eleonora Gualdoni, Francesca Franzon, Gemma Boleda, Marco Baroni

We compare the zero-shot performance of a neural caption-based image retriever when given as input either human-produced captions or captions generated by a neural captioner.

Retrieval

Humans Meet Models on Object Naming: A New Dataset and Analysis

1 code implementation COLING 2020 Carina Silberer, Sina Zarrieß, Matthijs Westera, Gemma Boleda

We also find that standard evaluations underestimate the actual effectiveness of the naming model: on the single-label names of the original dataset (Visual Genome), it obtains 27% fewer accuracy points than on MN v2, which includes all valid object names.

Object

Probing for Referential Information in Language Models

no code implementations ACL 2020 Ionut-Teodor Sorodoc, Kristina Gulordava, Gemma Boleda

Language models keep track of complex information about the preceding context, including, e.g., syntactic relations in a sentence.

Sentence

Object Naming in Language and Vision: A Survey and a New Dataset

no code implementations LREC 2020 Carina Silberer, Sina Zarrieß, Gemma Boleda

We highlight the challenges involved and provide a preliminary analysis of the ManyNames data, showing that there is a high level of agreement in naming, on average.

Object

Deep daxes: Mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks

no code implementations8 Apr 2020 Kristina Gulordava, Thomas Brochhagen, Gemma Boleda

We find that constraints in both learning and selection can foster mutual exclusivity, as long as they put words in competition for lexical meaning.

Putting words in context: LSTM language models and lexical ambiguity

1 code implementation ACL 2019 Laura Aina, Kristina Gulordava, Gemma Boleda

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers.

Language Modelling Word Embeddings
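The distinction the paper draws — a fixed, context-invariant vector per word type that only becomes context-dependent once it passes through the recurrent hidden layers — can be illustrated with a minimal sketch. This is not the authors' LSTM: it uses a toy Elman-style RNN with made-up random embeddings and weights, purely to show that the same static embedding yields different hidden states in different contexts.

```python
import math
import random

random.seed(0)
dim = 4
vocab = ["the", "bank", "river", "money"]

# Context-invariant word embeddings: one fixed vector per word type
# (toy random values, standing in for trained embeddings).
emb = {w: [random.uniform(-1, 1) for _ in range(dim)] for w in vocab}

# Toy recurrent weights, standing in for the LSTM's hidden layer.
W = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(dim)]

def contextualize(sentence):
    """One-layer Elman-style recurrence: h_t = tanh(W h_{t-1} + x_t)."""
    h = [0.0] * dim
    for w in sentence:
        x = emb[w]
        h = [math.tanh(sum(W[i][j] * h[j] for j in range(dim)) + x[i])
             for i in range(dim)]
    return h

h1 = contextualize(["the", "river", "bank"])
h2 = contextualize(["the", "money", "bank"])

# "bank" has one static embedding, but its in-context hidden state
# differs depending on the preceding words:
print(h1 != h2)
```

The static lookup `emb["bank"]` is identical in both sentences; only the recurrent state, which has absorbed "river" vs. "money", differs — the "putting words in context" the paper analyzes.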

Don't Blame Distributional Semantics if it can't do Entailment

no code implementations WS 2019 Matthijs Westera, Gemma Boleda

Our proposal sheds light on the role of distributional semantics in a broader theory of language and cognition, its relationship to formal semantics, and its place in computational models.

Semantic Similarity Semantic Textual Similarity

What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue

1 code implementation NAACL 2019 Laura Aina, Carina Silberer, Matthijs Westera, Ionut-Teodor Sorodoc, Gemma Boleda

In this paper we analyze the behavior of two recently proposed entity-centric models in a referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4).

Entity Linking

Distributional Semantics and Linguistic Theory

no code implementations6 May 2019 Gemma Boleda

Distributional semantics provides multi-dimensional, graded, empirically induced word representations that successfully capture many aspects of meaning in natural languages, as shown in a large body of work in computational linguistics; yet, its impact in theoretical linguistics has so far been limited.

Short-Term Meaning Shift: A Distributional Exploration

1 code implementation NAACL 2019 Marco Del Tredici, Raquel Fernández, Gemma Boleda

We present the first exploration of meaning shift over short periods of time in online communities using distributional representations.

Instantiation

no code implementations5 Aug 2018 Abhijeet Gupta, Gemma Boleda, Sebastian Pado

Our paper closes this gap by investigating and modeling the lexical relation of instantiation, which holds between an entity-denoting and a category-denoting expression (Marie Curie -- scientist or Mumbai -- city).

Word Embeddings

AMORE-UPF at SemEval-2018 Task 4: BiLSTM with Entity Library

1 code implementation SEMEVAL 2018 Laura Aina, Carina Silberer, Ionut-Teodor Sorodoc, Matthijs Westera, Gemma Boleda

This paper describes our winning contribution to SemEval 2018 Task 4: Character Identification on Multiparty Dialogues.

Talking about the world with a distributed model

no code implementations WS 2017 Gemma Boleda

We use language to talk about the world, and so reference is a crucial property of language.

Text Generation

Instances and concepts in distributional space

no code implementations EACL 2017 Gemma Boleda, Abhijeet Gupta, Sebastian Padó

Instances ("Mozart") are ontologically distinct from concepts or classes ("composer").

Living a discrete life in a continuous world: Reference with distributed representations

no code implementations6 Feb 2017 Gemma Boleda, Sebastian Padó, Nghia The Pham, Marco Baroni

Reference is a crucial property of language that allows us to connect linguistic expressions to the world.

"Show me the cup": Reference with Continuous Representations

no code implementations28 Jun 2016 Gemma Boleda, Sebastian Padó, Marco Baroni

One of the most basic functions of language is to refer to objects in a shared scene.

Zipf's law for word frequencies: word forms versus lemmas in long texts

no code implementations31 Jul 2014 Alvaro Corral, Gemma Boleda, Ramon Ferrer-i-Cancho

In all cases Zipf's law is fulfilled, in the sense that a power-law distribution of word or lemma frequencies is valid for several orders of magnitude.

LEMMA
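The power-law relation the paper tests — frequency falling as a power of rank, f(r) ∝ r^(−α) — can be checked with a short script. This is a sketch on a constructed toy corpus, not the paper's data; `zipf_slope` is a hypothetical helper that estimates the exponent as the least-squares slope of log-frequency against log-rank.

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Estimate the Zipf exponent: least-squares slope of
    log(frequency) vs. log(rank) over the rank-frequency distribution."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy corpus built so that f(r) = 60 / r exactly:
corpus = []
for rank, word in enumerate(["the", "of", "and", "to", "in"], start=1):
    corpus += [word] * (60 // rank)  # 60, 30, 20, 15, 12 occurrences

print(round(zipf_slope(corpus), 2))  # → -1.0
```

On real text, the same estimator applied to word forms and to lemmas would give the two (generally different) exponents the paper compares; a slope near −1 is the classical Zipf case.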
