no code implementations • 29 Mar 2025 • Anastasiia Fadeeva, Vincent Coriou, Diego Antognini, Claudiu Musat, Andrii Maksai
Tablets and styluses are increasingly popular for taking notes.
no code implementations • 27 May 2024 • Runqian Wang, Soumya Ghosh, David Cox, Diego Antognini, Aude Oliva, Rogerio Feris, Leonid Karlinsky
Our approach relies on synthetic data to transfer LoRA modules.
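The excerpt above only says that synthetic data is used to transfer LoRA modules; the actual method is not described here. As a loose, hypothetical analogy for transfer-by-distillation on synthetic inputs, the sketch below fits a tiny scalar "student" (standing in for low-rank adapter weights) to mimic a "teacher" model's outputs on randomly generated data — no real task data involved. All names and the scalar setup are illustrative assumptions, not the paper's method.

```python
import random

random.seed(0)

def teacher(x):
    """Stand-in for the source model with its LoRA applied (hypothetical)."""
    return 2.0 * x + 1.0

# 1. Generate synthetic inputs -- no real task data is needed.
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]

# 2. Fit a tiny "student" update (scalars a, b instead of low-rank matrices)
#    by SGD so its outputs match the teacher's on the synthetic data.
a, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x in xs:
        err = (a * x + b) - teacher(x)  # squared-loss gradient pieces
        a -= lr * err * x
        b -= lr * err
```

After training, `a` and `b` approach the teacher's parameters (2.0 and 1.0), i.e. the student has absorbed the teacher's behavior purely from synthetic samples.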
1 code implementation • 17 Apr 2024 • Yue Zhou, Yada Zhu, Diego Antognini, Yoon Kim, Yang Zhang
This paper studies the relationship between the surface form of a mathematical problem and its solvability by large language models.
no code implementations • 30 Nov 2023 • Lokesh Mishra, Cesar Berrospi, Kasper Dinkla, Diego Antognini, Francesco Fusco, Benedikt Bothur, Maksym Lysak, Nikolaos Livathinos, Ahmed Nassar, Panagiotis Vagenas, Lucas Morin, Christoph Auer, Michele Dolfi, Peter Staar
We present Deep Search DocQA.
no code implementations • 25 May 2023 • Francesco Fusco, Diego Antognini
Extracting dense representations for terms and phrases is a task of great importance for knowledge discovery platforms targeting highly technical fields.
no code implementations • 24 Oct 2022 • Francesco Fusco, Peter Staar, Diego Antognini
Developing term extractors that are able to generalize across very diverse and potentially highly technical domains is challenging, as annotations for domains requiring in-depth expertise are scarce and expensive to obtain.
no code implementations • 19 Oct 2022 • Thomas Frick, Diego Antognini, Mattia Rigotti, Ioana Giurgiu, Benjamin Grewe, Cristiano Malossi
Unfortunately, annotation costs are high because our proprietary civil engineering dataset must be annotated by highly trained engineers.
no code implementations • 15 May 2022 • Diego Antognini
This dissertation focuses on two fundamental challenges of addressing this need.
no code implementations • 13 May 2022 • Shuangqi Li, Diego Antognini, Boi Faltings
Explanation is important for text classification tasks.
no code implementations • 5 May 2022 • Diego Antognini, Shuyang Li, Boi Faltings, Julian McAuley
Prior studies have used pre-trained language models, or relied on small paired recipe data (e.g., a recipe paired with a similar one that satisfies a dietary constraint).
no code implementations • 5 Apr 2022 • Diego Antognini, Boi Faltings
As a result of revisiting critiquing from the perspective of multimodal generative models, recent work has proposed M&Ms-VAE, which achieves state-of-the-art performance in terms of recommendation, explanation, and critiquing.
1 code implementation • 9 Feb 2022 • Francesco Fusco, Damian Pascual, Peter Staar, Diego Antognini
Large pre-trained language models based on transformer architecture have drastically changed the natural language processing (NLP) landscape.
no code implementations • 13 Jul 2021 • Diana Petrescu, Diego Antognini, Boi Faltings
Recommendations with personalized explanations have been shown to increase user trust and perceived quality and help users make better decisions.
no code implementations • Findings (ACL) 2021 • Diego Antognini, Boi Faltings
One type of explanation is a rationale, i.e., a selection of input features such as relevant text snippets from which the model computes the outcome.
no code implementations • 3 May 2021 • Diego Antognini, Boi Faltings
Experiments on four real-world datasets demonstrate that among state-of-the-art models, our system is the first to dominate or match the performance in terms of recommendation, explanation, and multi-step critiquing.
no code implementations • 26 Apr 2021 • Martin Milenkoski, Diego Antognini, Claudiu Musat
The intuition is that user-item interactions in a source domain can augment the recommendation quality in a target domain.
no code implementations • 7 Dec 2020 • Saibo Geng, Diego Antognini
Multi-document summarization is the process of taking multiple texts as input and producing a short summary based on their content.
no code implementations • 19 Sep 2020 • Milena Filipovic, Blagoj Mitrevski, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
Finally, we validate that the Pareto Fronts obtained with the added objective dominate those produced by state-of-the-art models that are only optimized for accuracy on three real-world publicly available datasets.
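The claim above is that the obtained Pareto fronts dominate those of accuracy-only baselines. As a minimal illustration of the underlying notion (assuming every objective is maximized; helper names are hypothetical, not from the paper), Pareto dominance and front extraction can be sketched as:

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`: at least as good
    on every objective and strictly better on at least one (higher = better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (accuracy, second objective) pairs for four hypothetical models.
models = [(0.90, 0.10), (0.85, 0.30), (0.80, 0.25), (0.70, 0.05)]
front = pareto_front(models)  # the last two points are dominated
```

Here `(0.80, 0.25)` is dominated by `(0.85, 0.30)` and `(0.70, 0.05)` by `(0.90, 0.10)`, so the front keeps only the first two trade-off points.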
no code implementations • 10 Sep 2020 • Blagoj Mitrevski, Milena Filipovic, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
We evaluate the benefits of Multi-objective Adamize on two multi-objective recommender systems and for three different objective combinations, both correlated and conflicting.
no code implementations • 9 Sep 2020 • Kirtan Padh, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
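One common way to quantify the non-discrimination requirement described above is demographic parity: the classifier's positive-prediction rate should be independent of the sensitive attribute. The paper's actual fairness criteria are not given in this excerpt, so the sketch below (with hypothetical helper names, limited to two groups) is only an illustration of that one criterion:

```python
def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups.
    A gap of 0 means predictions are independent of the sensitive attribute."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]           # binary predictions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # sensitive attribute
gap = demographic_parity_gap(y_pred, groups)  # |3/4 - 1/4| = 0.5
```

A fairness-aware learner would add a term like this gap to its training objective, trading some accuracy for a smaller gap.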
no code implementations • 22 May 2020 • Diego Antognini, Claudiu Musat, Boi Faltings
Using personalized explanations to support recommendations has been shown to increase trust and perceived quality.
1 code implementation • LREC 2020 • Diego Antognini, Boi Faltings
In this paper, we propose GameWikiSum, a new domain-specific dataset for multi-document summarization, which is one hundred times larger than commonly used datasets and covers a different domain than news.
1 code implementation • LREC 2020 • Diego Antognini, Boi Faltings
In this paper, we propose HotelRec, a very large-scale hotel recommendation dataset, based on TripAdvisor, containing 50 million reviews.
1 code implementation • 9 Dec 2019 • Nikola Milojkovic, Diego Antognini, Giancarlo Bergamin, Boi Faltings, Claudiu Musat
Recommender systems need to mirror the complexity of the environment they are applied in.
Ranked #1 on Recommendation Systems on MovieLens 20M (Recall@20 metric)
no code implementations • 25 Sep 2019 • Diego Antognini, Claudiu Musat, Boi Faltings
Past work used attention and rationale mechanisms to find words that predict the target variable of a document.
no code implementations • 25 Sep 2019 • Diego Antognini, Claudiu Musat, Boi Faltings
Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost.
no code implementations • WS 2019 • Diego Antognini, Boi Faltings
To overcome these limitations, we present a novel method, which makes use of two types of sentence embeddings: universal embeddings, which are trained on a large unrelated corpus, and domain-specific embeddings, which are learned during training.
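The excerpt says the method uses two kinds of sentence embeddings but not how they are combined. A simple, commonly used way to merge such signals is concatenation; whether the paper does exactly this is not stated here, so the sketch below (plain lists standing in for vectors) is an assumption-labeled illustration only:

```python
def combine_embeddings(universal, domain):
    """Concatenate a universal sentence embedding (trained on a large,
    unrelated corpus) with a domain-specific one (learned during training),
    giving the downstream model access to both signals."""
    return universal + domain  # list concatenation = vector concatenation

universal = [0.2, -0.1, 0.7]  # e.g., from a pretrained general-purpose encoder
domain    = [0.5, 0.3]        # e.g., learned on in-domain data
combined  = combine_embeddings(universal, domain)  # 5-dimensional input
```

The downstream layers then operate on the combined vector, so general semantic knowledge and in-domain terminology both inform the prediction.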
no code implementations • 26 Sep 2017 • Athanasios Giannakopoulos, Diego Antognini, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl
Publicly available review corpora contain a plethora of opinionated aspect terms and cover a larger domain spectrum.
Aspect-Based Sentiment Analysis (ABSA)