no code implementations • 29 Aug 2024 • Zhuan Shi, Jing Yan, Xiaoli Tang, Lingjuan Lyu, Boi Faltings
The increasing sophistication of text-to-image generative models has led to complex challenges in defining and enforcing copyright infringement criteria and protection.
no code implementations • 7 Aug 2024 • Beatriz Borges, Negar Foroutan, Deniz Bayazit, Anna Sotnikova, Syrielle Montariol, Tanya Nazaretzky, Mohammadreza Banaei, Alireza Sakhaeirad, Philippe Servant, Seyed Parsa Neshaei, Jibril Frej, Angelika Romanou, Gail Weiss, Sepideh Mamooler, Zeming Chen, Simin Fan, Silin Gao, Mete Ismayilzada, Debjit Paul, Alexandre Schöpfer, Andrej Janchevski, Anja Tiede, Clarence Linden, Emanuele Troiani, Francesco Salvi, Freya Behrens, Giacomo Orsi, Giovanni Piccioli, Hadrien Sevel, Louis Coulon, Manuela Pineros-Rodriguez, Marin Bonnassies, Pierre Hellich, Puck van Gerwen, Sankalp Gambhir, Solal Pirelli, Thomas Blanchard, Timothée Callens, Toni Abi Aoun, Yannick Calvino Alonso, Yuri Cho, Alberto Chiappa, Antonio Sclocchi, Étienne Bruno, Florian Hofhammer, Gabriel Pescia, Geovani Rizk, Leello Dadi, Lucas Stoffl, Manoel Horta Ribeiro, Matthieu Bovel, Yueyang Pan, Aleksandra Radenovic, Alexandre Alahi, Alexander Mathis, Anne-Florence Bitbol, Boi Faltings, Cécile Hébert, Devis Tuia, François Maréchal, George Candea, Giuseppe Carleo, Jean-Cédric Chappelier, Nicolas Flammarion, Jean-Marie Fürbringer, Jean-Philippe Pellet, Karl Aberer, Lenka Zdeborová, Marcel Salathé, Martin Jaggi, Martin Rajman, Mathias Payer, Matthieu Wyart, Michael Gastpar, Michele Ceriotti, Ola Svensson, Olivier Lévêque, Paolo Ienne, Rachid Guerraoui, Robert West, Sanidhya Kashyap, Valerio Piazza, Viesturs Simanis, Viktor Kuncak, Volkan Cevher, Philippe Schwaller, Sacha Friedli, Patrick Jermann, Tanja Kaser, Antoine Bosselut
We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level STEM courses.
1 code implementation • 7 Aug 2024 • Luca Mouchel, Debjit Paul, Shaobo Cui, Robert West, Antoine Bosselut, Boi Faltings
Despite the remarkable performance of Large Language Models (LLMs) in natural language processing tasks, they still struggle with generating logically sound arguments, resulting in potential risks such as spreading misinformation.
no code implementations • 27 Jun 2024 • Shaobo Cui, Zhijing Jin, Bernhard Schölkopf, Boi Faltings
Understanding commonsense causality is a unique mark of intelligence for humans.
no code implementations • 21 Feb 2024 • Debjit Paul, Robert West, Antoine Bosselut, Boi Faltings
In this paper, we perform a causal mediation analysis on twelve LLMs to examine how intermediate reasoning steps generated by the LLM influence the final outcome and find that LLMs do not reliably use their intermediate reasoning steps when generating an answer.
no code implementations • 6 Jan 2024 • Shaobo Cui, Lazar Milikic, Yiyang Feng, Mete Ismayilzada, Debjit Paul, Antoine Bosselut, Boi Faltings
CESAR achieves a significant 69.7% relative improvement over existing metrics, increasing from 47.2% to 80.1% in capturing the causal strength change brought by supporters and defeaters.
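The relative-improvement figure follows directly from the two absolute scores quoted above; a quick arithmetic check:

```python
# Sanity check of the quoted relative improvement of CESAR over prior metrics.
old_score, new_score = 47.2, 80.1  # absolute scores quoted in the entry above
relative_improvement = (new_score - old_score) / old_score
print(f"{relative_improvement:.1%}")  # 69.7%
```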
1 code implementation • 4 Apr 2023 • Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings
Language models (LMs) have recently shown remarkable performance on reasoning tasks by explicitly generating intermediate inferences, e.g., chain-of-thought prompting.
1 code implementation • 13 Oct 2022 • Martin Josifoski, Maxime Peyrard, Frano Rajic, Jiheng Wei, Debjit Paul, Valentin Hartmann, Barun Patra, Vishrav Chaudhary, Emre Kiciman, Boi Faltings, Robert West
Specifically, by analyzing the correlation between the likelihood and the utility of predictions across a diverse set of tasks, we provide empirical evidence supporting the proposed taxonomy and a set of principles to structure reasoning when choosing a decoding algorithm.
no code implementations • 23 May 2022 • Ljubomir Rokvic, Panayiotis Danassis, Sai Praneeth Karimireddy, Boi Faltings
In Federated Learning, it is crucial to handle low-quality, corrupted, or malicious data.
no code implementations • 13 May 2022 • Shuangqi Li, Diego Antognini, Boi Faltings
Explanation is important for text classification tasks.
no code implementations • 5 May 2022 • Diego Antognini, Shuyang Li, Boi Faltings, Julian McAuley
Prior studies have used pre-trained language models, or relied on small paired recipe data (e.g., a recipe paired with a similar one that satisfies a dietary constraint).
no code implementations • 5 Apr 2022 • Diego Antognini, Boi Faltings
As a result of revisiting critiquing from the perspective of multimodal generative models, recent work has proposed M&Ms-VAE, which achieves state-of-the-art performance in terms of recommendation, explanation, and critiquing.
1 code implementation • EMNLP 2021 • Fei Mi, Wanhao Zhou, Fengyu Cai, Lingjing Kong, Minlie Huang, Boi Faltings
In this paper, we devise a self-training approach to utilize the abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems.
no code implementations • 28 Aug 2021 • Fei Mi, Tao Lin, Boi Faltings
In this paper, we consider scenarios that require learning new classes or data distributions quickly and incrementally over time, as it often occurs in real-world dynamic environments.
1 code implementation • 26 Aug 2021 • Fengyu Cai, Wanhao Zhou, Fei Mi, Boi Faltings
Utterance-level intent detection and token-level slot filling are two key tasks for natural language understanding (NLU) in task-oriented systems.
Ranked #3 on Slot Filling on MixSNIPS
no code implementations • 13 Jul 2021 • Diana Petrescu, Diego Antognini, Boi Faltings
Recommendations with personalized explanations have been shown to increase user trust and perceived quality and help users make better decisions.
1 code implementation • 10 Jun 2021 • Panayiotis Danassis, Aris Filos-Ratsikas, Haipeng Chen, Milind Tambe, Boi Faltings
Traditional competitive markets do not account for negative externalities: indirect costs that some participants impose on others, such as the cost of over-appropriating a common-pool resource (which diminishes future stock, and thus harvest, for everyone).
no code implementations • Findings (ACL) 2021 • Diego Antognini, Boi Faltings
One type of explanation is a rationale, i.e., a selection of input features such as relevant text snippets from which the model computes the outcome.
no code implementations • 9 May 2021 • Panayiotis Danassis, Florian Wiedemair, Boi Faltings
We present a multi-agent learning algorithm, ALMA-Learning, for efficient and fair allocations in large-scale systems.
no code implementations • 3 May 2021 • Diego Antognini, Boi Faltings
Experiments on four real-world datasets demonstrate that among state-of-the-art models, our system is the first to dominate or match the performance in terms of recommendation, explanation, and multi-step critiquing.
1 code implementation • 3 Feb 2021 • Panayiotis Danassis, Zeki Doruk Erden, Boi Faltings
Inspired by human behavior, we investigate the learning dynamics and emergence of temporal conventions, focusing on common-pool resources.
no code implementations • 16 Nov 2020 • Panayiotis Danassis, Aleksei Triastcyn, Boi Faltings
We introduce a practical and scalable algorithm (PALMA) for solving one of the fundamental problems of multi-agent systems -- finding matches and allocations -- in unboundedly large settings (e.g., resource allocation in urban environments, mobility-on-demand systems, etc.).
no code implementations • Findings of the Association for Computational Linguistics 2020 • Fei Mi, LiangWei Chen, Mengjie Zhao, Minlie Huang, Boi Faltings
Natural language generation (NLG) is an essential component of task-oriented dialog systems.
no code implementations • 19 Sep 2020 • Milena Filipovic, Blagoj Mitrevski, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
Finally, we validate that the Pareto Fronts obtained with the added objective dominate those produced by state-of-the-art models that are only optimized for accuracy on three real-world publicly available datasets.
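The dominance relation between Pareto fronts invoked above can be made concrete with a minimal, generic sketch (illustrative code with hypothetical objective values, not the paper's evaluation pipeline; all objectives are assumed to be maximized):

```python
# Generic Pareto-dominance utilities for multi-objective comparison.
def dominates(p, q):
    """True if point p Pareto-dominates point q: at least as good on every
    objective and strictly better on at least one (maximizing all objectives)."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_front(points):
    """Keep only the non-dominated points of a solution set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (accuracy, diversity) trade-off points for four models:
solutions = [(0.9, 0.2), (0.7, 0.7), (0.5, 0.6), (0.3, 0.9)]
front = pareto_front(solutions)
# (0.5, 0.6) is dominated by (0.7, 0.7); the other three form the front.
```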
no code implementations • 10 Sep 2020 • Blagoj Mitrevski, Milena Filipovic, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
We evaluate the benefits of Multi-objective Adamize on two multi-objective recommender systems and for three different objective combinations, both correlated and conflicting.
no code implementations • 9 Sep 2020 • Kirtan Padh, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
1 code implementation • 23 Jul 2020 • Fei Mi, Xiaoyu Lin, Boi Faltings
In this case, the recommender is updated continually and periodically with new data that arrives in each update cycle, and the updated model needs to provide recommendations for user activities before the next model update.
no code implementations • 22 May 2020 • Diego Antognini, Claudiu Musat, Boi Faltings
Using personalized explanations to support recommendations has been shown to increase trust and perceived quality.
no code implementations • 28 Apr 2020 • Fei Mi, Boi Faltings
We empirically show that MAN is well-suited for the incremental SR task, and it consistently outperforms state-of-the-art neural and nonparametric methods.
no code implementations • 2 Mar 2020 • Aleksei Triastcyn, Boi Faltings
This paper considers the problem of enhancing user privacy in common machine learning development tasks, such as data annotation and inspection, by substituting the real data with samples from a generative adversarial network.
1 code implementation • LREC 2020 • Diego Antognini, Boi Faltings
In this paper, we propose HotelRec, a very large-scale hotel recommendation dataset, based on TripAdvisor, containing 50 million reviews.
1 code implementation • LREC 2020 • Diego Antognini, Boi Faltings
In this paper, we propose GameWikiSum, a new domain-specific dataset for multi-document summarization, which is one hundred times larger than commonly used datasets, and in another domain than news.
no code implementations • 17 Dec 2019 • Panayiotis Danassis, Marija Sakota, Aris Filos-Ratsikas, Boi Faltings
We study the optimization of large-scale, real-time ridesharing systems and propose a modular design methodology, Component Algorithms for Ridesharing (CAR).
1 code implementation • 9 Dec 2019 • Nikola Milojkovic, Diego Antognini, Giancarlo Bergamin, Boi Faltings, Claudiu Musat
Recommender systems need to mirror the complexity of the environment they are applied in.
Ranked #1 on Recommendation Systems on MovieLens 20M (Recall@20 metric)
no code implementations • 22 Nov 2019 • Aleksei Triastcyn, Boi Faltings
We consider the problem of reinforcing federated learning with formal privacy guarantees.
no code implementations • 18 Oct 2019 • Aleksei Triastcyn, Boi Faltings
In this paper, we propose FedGP, a framework for privacy-preserving data release in the federated learning setting.
no code implementations • 25 Sep 2019 • Diego Antognini, Claudiu Musat, Boi Faltings
Past work used attention and rationale mechanisms to find words that predict the target variable of a document.
no code implementations • 25 Sep 2019 • Diego Antognini, Claudiu Musat, Boi Faltings
Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost.
no code implementations • WS 2019 • Diego Antognini, Boi Faltings
To overcome these limitations, we present a novel method, which makes use of two types of sentence embeddings: universal embeddings, which are trained on a large unrelated corpus, and domain-specific embeddings, which are learned during training.
no code implementations • 6 Sep 2019 • Boi Faltings, Martin Charles Golumbic
Partitioning a graph using graph separators, and particularly clique separators, are well-known techniques to decompose a graph into smaller units which can be treated independently.
no code implementations • 30 Aug 2019 • Adam Richardson, Aris Filos-Ratsikas, Boi Faltings
We consider a crowdsourcing data acquisition scenario, such as federated learning, where a Center collects data points from a set of rational Agents, with the aim of training a model.
no code implementations • 27 Aug 2019 • Naman Goel, Cyril van Schreven, Aris Filos-Ratsikas, Boi Faltings
For the first time, we show how to implement a trustless and transparent oracle in Ethereum.
no code implementations • 14 May 2019 • Fei Mi, Minlie Huang, Jiyong Zhang, Boi Faltings
Natural language generation (NLG) is an essential component of task-oriented dialogue systems.
no code implementations • 25 Feb 2019 • Panayiotis Danassis, Aris Filos-Ratsikas, Boi Faltings
We present a novel anytime heuristic (ALMA), inspired by the human principle of altruism, for solving the assignment problem.
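The anytime property means a partial matching is available whenever the loop is interrupted. A rough illustration with a generic greedy heuristic for the assignment problem (a sketch under an assumed cost-matrix input, not the ALMA algorithm itself):

```python
def greedy_anytime_assignment(cost):
    """Generic anytime heuristic for the assignment problem: repeatedly
    commit the cheapest remaining (agent, task) pair. The partial matching
    is valid at every step, so the loop can be stopped at any time."""
    n = len(cost)
    pairs = sorted((cost[i][j], i, j) for i in range(n) for j in range(n))
    assigned_agents, assigned_tasks, matching = set(), set(), {}
    for c, i, j in pairs:
        if i not in assigned_agents and j not in assigned_tasks:
            matching[i] = j
            assigned_agents.add(i)
            assigned_tasks.add(j)
    return matching

# Two agents, two tasks; the diagonal is clearly cheapest:
print(greedy_anytime_assignment([[1, 9], [9, 2]]))  # {0: 0, 1: 1}
```

Greedy matching is not optimal in general (the Hungarian algorithm is), but it trades solution quality for the interruptibility that an anytime setting requires.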
1 code implementation • ICML 2020 • Aleksei Triastcyn, Boi Faltings
Traditional differential privacy is independent of the data distribution.
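For contrast with the data-distribution-independent baseline mentioned above, here is a minimal sketch of the standard Laplace mechanism for a counting query, which calibrates noise to worst-case sensitivity regardless of the data (illustrative only; not the paper's Bayesian variant):

```python
import random

def dp_count(values, threshold, epsilon):
    """Release an epsilon-DP count of values above a threshold via the
    standard Laplace mechanism. A counting query has sensitivity 1, so the
    noise scale is 1/epsilon no matter how the data are distributed."""
    true_count = sum(v > threshold for v in values)
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```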
no code implementations • 31 Oct 2018 • Naman Goel, Boi Faltings
Recent studies have shown that the labels collected from crowdworkers can be discriminatory with respect to sensitive attributes such as gender and race.
1 code implementation • 10 Jun 2018 • Fei Mi, Boi Faltings
Therefore, recommendations need to be adaptive to such frequent changes.
no code implementations • 27 Apr 2018 • Jun Lu, Wei Ma, Boi Faltings
We explore $CompNet$, which morphs a well-trained neural network into a deeper one while preserving the network function; the added layer is compact.
no code implementations • 16 Apr 2018 • Naman Goel, Boi Faltings
We propose a novel mechanism that assigns gold tasks to only a few workers and exploits transitivity to derive accuracy of the rest of the workers from their peers' accuracy.
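Under a standard independence assumption on binary tasks, a peer's accuracy can indeed be recovered from agreement rates alone; a minimal sketch with hypothetical numbers (not the paper's actual mechanism):

```python
def peer_accuracy(agreement_rate, known_accuracy):
    """Infer a worker's accuracy on binary tasks from their agreement rate
    with a worker of known accuracy, assuming independent errors:
    P(agree) = a_i*a_j + (1 - a_i)*(1 - a_j), solved here for a_j."""
    return (agreement_rate + known_accuracy - 1) / (2 * known_accuracy - 1)

# A gold-tested worker with 90% accuracy agrees with a peer on 78% of tasks:
print(peer_accuracy(0.78, 0.9))  # ≈ 0.85
```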
no code implementations • 8 Mar 2018 • Aleksei Triastcyn, Boi Faltings
In this paper, we propose generating artificial data that retain statistical properties of real data as the means of providing privacy with respect to the original dataset.
no code implementations • ICLR 2018 • Aleksei Triastcyn, Boi Faltings
In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data.
1 code implementation • 22 Jun 2017 • Chaitanya K. Joshi, Fei Mi, Boi Faltings
The main goal of modeling human conversation is to create agents which can interact with people in both open-ended and goal-oriented scenarios.
no code implementations • 4 Feb 2014 • Thomas Leaute, Boi Faltings
As large-scale theft of data from corporate servers is becoming increasingly common, it becomes interesting to examine alternatives to the paradigm of centralizing sensitive data into large databases.
no code implementations • 4 Mar 2013 • Florent Garcin, Christos Dimitrakakis, Boi Faltings
The profusion of online news articles makes it difficult to find interesting articles, a problem that can be assuaged by using a recommender system to bring the most relevant news stories to readers.