Search Results for author: Federico Bianchi

Found 27 papers, 15 papers with code

MilaNLP @ WASSA: Does BERT Feel Sad When You Cry?

no code implementations EACL (WASSA) 2021 Tommaso Fornaciari, Federico Bianchi, Debora Nozza, Dirk Hovy

This paper describes the submission of the MilaNLP team (Bocconi University, Milan) to the WASSA 2021 Shared Task on Empathy Detection and Emotion Classification.

Emotion Classification · Multi-Task Learning

FEEL-IT: Emotion and Sentiment Classification for the Italian Language

1 code implementation EACL (WASSA) 2021 Federico Bianchi, Debora Nozza, Dirk Hovy

While sentiment analysis is a popular task to understand people’s reactions online, we often need more nuanced information: is the post negative because the user is angry or sad?

Sentiment Analysis

Beyond NDCG: behavioral testing of recommender systems with RecList

1 code implementation 18 Nov 2021 Patrick John Chia, Jacopo Tagliabue, Federico Bianchi, Chloe He, Brian Ko

As with most Machine Learning systems, recommender systems are typically evaluated through performance metrics computed over held-out data points.

Recommendation Systems

Language Invariant Properties in Natural Language Processing

1 code implementation 27 Sep 2021 Federico Bianchi, Debora Nozza, Dirk Hovy

We introduce language invariant properties, i.e., properties that should not change when we transform text, and show how they can be used to quantitatively evaluate the robustness of transformation algorithms.

Paraphrase Generation · Translation
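The core idea can be sketched in a few lines: pick a property predictor, apply it before and after a transformation, and measure how often the property is preserved. Everything here (the tiny sentiment lexicon, the example pairs) is invented for illustration:

```python
# Toy property predictor: lexicon-based sentiment (illustrative only).
POSITIVE = {"great", "love", "happy"}

def sentiment(text):
    return "pos" if POSITIVE & set(text.lower().split()) else "neg"

def invariance_score(pairs):
    """Fraction of (original, transformed) pairs whose property
    (here, sentiment) is preserved by the transformation."""
    kept = sum(sentiment(a) == sentiment(b) for a, b in pairs)
    return kept / len(pairs)

pairs = [("I love this movie", "this film is great"),   # paraphrase
         ("terrible plot", "the plot was bad")]          # paraphrase
print(invariance_score(pairs))  # 1.0
```

A paraphrase or translation system that frequently flips such properties is, by this measure, not robust.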

SWEAT: Scoring Polarization of Topics across Different Corpora

1 code implementation EMNLP 2021 Federico Bianchi, Marco Marelli, Paolo Nicoli, Matteo Palmonari

Understanding differences of viewpoints across corpora is a fundamental task for computational social sciences.

Contrastive Language-Image Pre-training for the Italian Language

1 code implementation 19 Aug 2021 Federico Bianchi, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti, Sri Lakshmi

CLIP (Contrastive Language-Image Pre-training) is a very recent multi-modal model that jointly learns representations of images and texts.

Image Retrieval · Multi-label zero-shot learning

Query2Prod2Vec: Grounded Word Embeddings for eCommerce

1 code implementation NAACL 2021 Federico Bianchi, Jacopo Tagliabue, Bingqing Yu

We present Query2Prod2Vec, a model that grounds lexical representations for product search in product embeddings: in our model, meaning is a mapping between words and a latent space of products in a digital shop.

Word Embeddings
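The grounding idea from the abstract — a query's meaning is a mapping into a latent space of products — can be sketched as follows (the vectors and click data below are made up; this is not the paper's implementation):

```python
import numpy as np

# Hypothetical product embeddings from a prod2vec-style model.
prod_emb = {"p1": np.array([1.0, 0.0]),
            "p2": np.array([0.9, 0.1]),
            "p3": np.array([0.0, 1.0])}

def embed_query(clicked_products):
    """Ground a query as the mean embedding of the products users
    clicked after issuing it."""
    return np.mean([prod_emb[p] for p in clicked_products], axis=0)

q = embed_query(["p1", "p2"])  # e.g., clicks following "running shoes"
print(q)  # → [0.95 0.05]
```

The query vector then lives in the same space as the products, so similarity to candidate products is a direct dot product or cosine.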

Words with Consistent Diachronic Usage Patterns are Learned Earlier: A Computational Analysis Using Temporally Aligned Word Embeddings

1 code implementation Cognitive Science 2021 Giovanni Cassani, Federico Bianchi, Marco Marelli

In this study, we use temporally aligned word embeddings and a large diachronic corpus of English to quantify language change in a data-driven, scalable way, which is grounded in language use.

Diachronic Word Embeddings · Word Embeddings

SIGIR 2021 E-Commerce Workshop Data Challenge

3 code implementations 19 Apr 2021 Jacopo Tagliabue, Ciro Greco, Jean-Francis Roy, Bingqing Yu, Patrick John Chia, Federico Bianchi, Giovanni Cassani

The 2021 SIGIR workshop on eCommerce is hosting the Coveo Data Challenge for "In-session prediction for purchase intent and recommendations".

Language in a (Search) Box: Grounding Language Learning in Real-World Human-Machine Interaction

no code implementations NAACL 2021 Federico Bianchi, Ciro Greco, Jacopo Tagliabue

We investigate grounded language learning through real-world data, by modelling a teacher-learner dynamics through the natural interactions occurring between users and search engines; in particular, we explore the emergence of semantic generalization from unsupervised dense representations outside of synthetic environments.

Grounded language learning

Fantastic Embeddings and How to Align Them: Zero-Shot Inference in a Multi-Shop Scenario

no code implementations 20 Jul 2020 Federico Bianchi, Jacopo Tagliabue, Bingqing Yu, Luca Bigon, Ciro Greco

This paper addresses the challenge of leveraging multiple embedding spaces for multi-shop personalization, proving that zero-shot inference is possible by transferring shopping intent from one website to another without manual intervention.

Knowledge Graph Embeddings and Explainable AI

no code implementations 30 Apr 2020 Federico Bianchi, Gaetano Rossiello, Luca Costabello, Matteo Palmonari, Pasquale Minervini

Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces.

Knowledge Graph Embeddings
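A minimal example of the embedding approach the abstract describes, using the classic TransE scoring scheme: entities and relations share one vector space, and a triple (h, r, t) is plausible when h + r ≈ t. The vectors below are hand-picked for illustration, not trained:

```python
import numpy as np

# Hand-picked embeddings (illustrative, not learned).
emb = {"Rome":       np.array([1.0, 2.0]),
       "Italy":      np.array([3.0, 3.0]),
       "capital_of": np.array([2.0, 1.0])}

def score(h, r, t):
    """TransE-style score: distance of h + r from t.
    Lower score = more plausible triple."""
    return float(np.linalg.norm(emb[h] + emb[r] - emb[t]))

print(score("Rome", "capital_of", "Italy"))  # 0.0, an exact fit
```

Triples that violate the relation (e.g., the inverse direction) land far from zero, which is how ranking-based link prediction works in these models.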

Cross-lingual Contextualized Topic Models with Zero-shot Learning

2 code implementations EACL 2021 Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, Elisabetta Fersini

Many datasets are available in multiple languages and cover the same content, but the linguistic differences make it impossible to use traditional, bag-of-words-based topic models.

Topic Models · Transfer Learning

Compass-aligned Distributional Embeddings for Studying Semantic Differences across Corpora

1 code implementation 13 Apr 2020 Federico Bianchi, Valerio Di Carlo, Paolo Nicoli, Matteo Palmonari

In this paper, we present a general framework to support cross-corpora language studies with word embeddings, where embeddings generated from different corpora can be compared to find correspondences and differences in meaning across the corpora.

Word Embeddings

Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence

3 code implementations ACL 2021 Federico Bianchi, Silvia Terragni, Dirk Hovy

Topic models extract groups of words from documents; interpreting these groups as topics hopefully allows for a better understanding of the data.

Sentence Embeddings · Topic Models
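One way to picture the "contextualized" part: the document's bag-of-words vector is combined with a pretrained sentence embedding before entering the topic model's encoder. The tiny vocabulary and 4-dimensional "sentence embedding" below are made up; this sketches only the input construction, not the full model:

```python
import numpy as np

vocab = ["topic", "model", "soccer"]  # toy vocabulary

def bow(doc):
    """Bag-of-words count vector over the toy vocabulary."""
    return np.array([doc.split().count(w) for w in vocab], dtype=float)

def ctm_input(doc, sent_emb):
    """Concatenate BoW counts with a contextual sentence embedding,
    so the encoder sees both word counts and pretrained semantics."""
    return np.concatenate([bow(doc), sent_emb])

x = ctm_input("topic model topic", np.array([0.1, 0.2, 0.3, 0.4]))
print(x.shape)  # (7,)
```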

"An Image is Worth a Thousand Features": Scalable Product Representations for In-Session Type-Ahead Personalization

no code implementations 11 Mar 2020 Bingqing Yu, Jacopo Tagliabue, Ciro Greco, Federico Bianchi

We address the problem of personalizing query completion in a digital commerce setting, in which the bounce rate is typically high and recurring users are rare.

What the [MASK]? Making Sense of Language-Specific BERT Models

no code implementations 5 Mar 2020 Debora Nozza, Federico Bianchi, Dirk Hovy

Driven by the potential of BERT models, the NLP community has started to investigate and release a large number of BERT models that are trained on a particular language and tested on a specific data domain and task.

Language Modelling

Training Temporal Word Embeddings with a Compass

1 code implementation 5 Jun 2019 Valerio Di Carlo, Federico Bianchi, Matteo Palmonari

Temporal word embeddings have been proposed to support the analysis of word meaning shifts during time and to study the evolution of languages.

Diachronic Word Embeddings · Word Embeddings
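Once time slices are trained against a shared "compass", a word's vectors from different periods live in one coordinate system and can be compared directly. A toy downstream use (the vectors below are invented, not trained):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical aligned vectors for one word in two time slices.
vec_1950 = np.array([0.9, 0.1])
vec_2000 = np.array([0.2, 0.9])

# Semantic-change score: 1 - similarity across slices.
drift = 1 - cosine(vec_1950, vec_2000)
print(round(drift, 3))
```

Without the shared compass, each slice's embedding space would have its own arbitrary rotation and this comparison would be meaningless.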

Experimental neural network enhanced quantum tomography

no code implementations 11 Apr 2019 Adriano Macarone Palmieri, Egor Kovlakov, Federico Bianchi, Dmitry Yudin, Stanislav Straupe, Jacob Biamonte, Sergei Kulik

We compared the neural network state reconstruction protocol with a protocol treating SPAM errors by process tomography, as well as to a SPAM-agnostic protocol with idealized measurements.

Reasoning over RDF Knowledge Bases using Deep Learning

2 code implementations 9 Nov 2018 Monireh Ebrahimi, Md. Kamruzzaman Sarker, Federico Bianchi, Ning Xie, Derek Doran, Pascal Hitzler

Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field.

Knowledge Graphs
