Search Results for author: Guillem Collell

Found 10 papers, 3 papers with code

Digital Forgetting in Large Language Models: A Survey of Unlearning Methods

no code implementations 2 Apr 2024 Alberto Blanco-Justicia, Najeeb Jebreel, Benet Manzanares, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan

The objective of digital forgetting is, given a model with undesirable knowledge or behavior, to obtain a new model where the detected issues are no longer present.

Machine Unlearning
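
As context for the unlearning methods this survey covers, here is a minimal sketch of one common approximate-unlearning baseline: gradient ascent on a "forget" set combined with ordinary training on a "retain" set. The model, batches, and the `alpha` weight are hypothetical placeholders for illustration, not a method proposed by the survey itself.

```python
# Sketch of a gradient-ascent unlearning step (illustrative assumption, not the
# survey's method): increase the loss on data to be forgotten while preserving
# performance on retained data.
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
    """One update: ascend the loss on forget data, descend it on retain data."""
    fx, fy = forget_batch
    rx, ry = retain_batch

    optimizer.zero_grad()
    forget_loss = F.cross_entropy(model(fx), fy)
    retain_loss = F.cross_entropy(model(rx), ry)
    # Maximize loss on the forget set, keep utility on the retain set.
    loss = -alpha * forget_loss + retain_loss
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```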

Decoding Language Spatial Relations to 2D Spatial Arrangements

1 code implementation Findings of the Association for Computational Linguistics 2020 Gorjan Radevski, Guillem Collell, Marie-Francine Moens, Tinne Tuytelaars

We address the problem of multimodal spatial understanding by decoding a set of language-expressed spatial relations to a set of 2D spatial arrangements in a multi-object and multi-relationship setting.

Do Neural Network Cross-Modal Mappings Really Bridge Modalities?

no code implementations ACL 2018 Guillem Collell, Marie-Francine Moens

Feed-forward networks are widely used in cross-modal applications to bridge modalities by mapping distributed vectors of one modality to the other, or to a shared space.

Retrieval
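
For the feed-forward cross-modal mappings described above, a minimal sketch of the general setup follows: a small MLP that maps vectors of one modality (here, image features) into the space of another (here, word embeddings). The dimensions, activation, and MSE objective are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a feed-forward cross-modal mapping (assumed dimensions and setup).
import torch
import torch.nn as nn

class CrossModalMapping(nn.Module):
    def __init__(self, in_dim=4096, hidden_dim=1024, out_dim=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Regress word embeddings from paired image features.
model = CrossModalMapping()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
image_feats = torch.randn(32, 4096)   # placeholder batch of visual vectors
word_embs = torch.randn(32, 300)      # placeholder target linguistic vectors
loss = nn.functional.mse_loss(model(image_feats), word_embs)
loss.backward()
optimizer.step()
```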

Learning Representations Specialized in Spatial Knowledge: Leveraging Language and Vision

1 code implementation TACL 2018 Guillem Collell, Marie-Francine Moens

Here, we move one step forward in this direction and learn such representations by leveraging a task consisting in predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images.

Dependency Parsing, Object, +4
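
To make the task above concrete, a minimal sketch of a model that maps an object-relationship-object triple to a continuous 2D arrangement is shown below. The vocabulary size, embedding dimensions, and the choice to predict a single (x, y) offset are assumptions for illustration, not the paper's exact parameterization.

```python
# Sketch: predict a 2D spatial offset from an (object, relation, object) triple.
import torch
import torch.nn as nn

class SpatialArrangementPredictor(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),   # predicted (x, y) offset
        )

    def forward(self, subj_ids, rel_ids, obj_ids):
        triple = torch.cat(
            [self.emb(subj_ids), self.emb(rel_ids), self.emb(obj_ids)], dim=-1
        )
        return self.mlp(triple)

# e.g., "cat under chair" -> predicted 2D offset of "cat" relative to "chair"
model = SpatialArrangementPredictor()
xy = model(torch.tensor([1]), torch.tensor([2]), torch.tensor([3]))
```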

Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates

1 code implementation 18 Nov 2017 Guillem Collell, Luc van Gool, Marie-Francine Moens

In contrast with prior work that restricts spatial templates to explicit spatial prepositions (e.g., "glass on table"), here we extend this concept to implicit spatial language, i.e., those relationships (generally actions) for which the spatial arrangement of the objects is only implicitly implied (e.g., "man riding horse").

Common Sense Reasoning, Question Answering, +1

Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings

no code implementations 25 Mar 2017 Guillem Collell, Teddy Zhang, Marie-Francine Moens

Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision.
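
A minimal sketch of one way to build multimodal embeddings in the spirit of this title: learn to predict visual feature vectors from word embeddings, then concatenate each word embedding with its predicted visual vector. The dimensions and the fusion-by-concatenation step are illustrative assumptions rather than the paper's exact pipeline.

```python
# Sketch: reconstructive multimodal embeddings (assumed dimensions and fusion).
import torch
import torch.nn as nn

mapper = nn.Sequential(nn.Linear(300, 512), nn.Tanh(), nn.Linear(512, 128))
optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-3)

# Fit the mapper on words that have paired visual features.
word_embs = torch.randn(64, 300)     # placeholder linguistic vectors
visual_feats = torch.randn(64, 128)  # placeholder visual vectors
loss = nn.functional.mse_loss(mapper(word_embs), visual_feats)
loss.backward()
optimizer.step()

# Multimodal embedding for any word: [linguistic vector ; predicted visual vector].
def multimodal_embedding(word_emb):
    with torch.no_grad():
        return torch.cat([word_emb, mapper(word_emb)], dim=-1)
```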

Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations

no code implementations COLING 2016 Guillem Collell, Marie-Francine Moens

Human concept representations are often grounded with visual information, yet some aspects of meaning cannot be visually represented or are better described with language.

Reviving Threshold-Moving: a Simple Plug-in Bagging Ensemble for Binary and Multiclass Imbalanced Data

no code implementations 28 Jun 2016 Guillem Collell, Drazen Prelec, Kaustubh Patil

An alternative is to use a so-called threshold-moving method that a posteriori changes the decision threshold of a model to counteract the imbalance, and thus has the potential to adapt to the performance measure of interest.
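
A minimal sketch of threshold-moving in the binary case: train a probabilistic classifier as usual, then a posteriori move the decision threshold from 0.5 to the empirical positive-class prior. The dataset, classifier, and threshold choice are illustrative assumptions, not the bagging ensemble proposed in the paper.

```python
# Sketch of a posteriori threshold-moving for an imbalanced binary problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~95% negatives, ~5% positives.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]

threshold = y_tr.mean()                     # moved threshold = positive-class prior
preds_default = (probs >= 0.5).astype(int)  # standard decision rule
preds_moved = (probs >= threshold).astype(int)  # threshold-moving decision rule
```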
