no code implementations • 6 May 2025 • Cléa Chataigner, Rebecca Ma, Prakhar Ganesh, Afaf Taïk, Elliot Creager, Golnoosh Farnadi
Small changes in how a prompt is worded can lead to meaningful differences in the behavior of large language models (LLMs), raising concerns about the stability and reliability of their evaluations.
1 code implementation • 31 Mar 2025 • Florian Carichon, Romain Rampa, Golnoosh Farnadi
By introducing a set of Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to analyze textual divergences when recipes from one community are modified by another with a different cultural background.
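As a rough illustration (not the paper's implementation), the novelty signal can be sketched as the Jensen-Shannon divergence between token-frequency distributions of an original and an adapted recipe; the toy recipes and vocabulary construction below are invented for the example:

```python
import numpy as np
from collections import Counter
from scipy.spatial.distance import jensenshannon

def token_distribution(text, vocab):
    """Normalized token-frequency distribution over a shared vocabulary."""
    counts = Counter(text.lower().split())
    freqs = np.array([counts[w] for w in vocab], dtype=float)
    return freqs / freqs.sum()

original = "butter garlic cream pasta parmesan"
adapted = "coconut milk garlic curry pasta chili"
vocab = sorted(set(original.split()) | set(adapted.split()))

p, q = token_distribution(original, vocab), token_distribution(adapted, vocab)
# scipy returns the JS *distance* (the square root of the divergence)
js_divergence = jensenshannon(p, q, base=2) ** 2
print(f"JSD between original and adapted recipe: {js_divergence:.3f}")
```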
no code implementations • 24 Jan 2025 • Prakhar Ganesh, Afaf Taik, Golnoosh Farnadi
Algorithmic modelling relies on limited information in data to extrapolate outcomes for unseen scenarios, often embedding an element of arbitrariness in its decisions.
no code implementations • 21 Jan 2025 • Khaoula Chehbouni, Martine De Cock, Gilles Caporossi, Afaf Taik, Reihaneh Rabbany, Golnoosh Farnadi
The increased screen time and isolation caused by the COVID-19 pandemic have led to a significant surge in cases of online grooming, which is the use of strategies by predators to lure children into sexual exploitation.
no code implementations • 18 Dec 2024 • Armin Moradi, Nicola Neophytou, Florian Carichon, Golnoosh Farnadi
Using the country of origin as a proxy for cultural identity, we link this demographic attribute to popularity bias by refining the embedding space learning process.
no code implementations • 17 Nov 2024 • Prakhar Ganesh, Usman Gohar, Lu Cheng, Golnoosh Farnadi
In this work, we show significant variance in the fairness achieved by several algorithms, as well as the influence of the learning pipeline on fairness scores.
no code implementations • 12 Nov 2024 • Khaoula Chehbouni, Jonathan Colaço Carr, Yash More, Jackie CK Cheung, Golnoosh Farnadi
In an effort to mitigate the harms of large language models (LLMs), learning from human feedback (LHF) has been used to steer LLMs towards outputs that are intended to be both less harmful and more helpful.
no code implementations • 23 Oct 2024 • Cléa Chataigner, Afaf Taïk, Golnoosh Farnadi
In this study, we explore the phenomenon of hallucinations across multiple languages in freeform text generation, focusing on what we call multilingual hallucination gaps.
no code implementations • 22 Oct 2024 • Rohan Sukumaran, Aarash Feizi, Adriana Romero-Soriano, Golnoosh Farnadi
To the best of our knowledge, we are the first to introduce fairness-based fine-tuning through LoRA.
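As a hedged sketch of what fairness-based fine-tuning through LoRA can look like (the adapter wiring, demographic-parity penalty, and loss weighting here are illustrative, not the paper's exact method):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

def demographic_parity_gap(scores, groups):
    """|mean score of group 0 - mean score of group 1| as a differentiable penalty."""
    return (scores[groups == 0].mean() - scores[groups == 1].mean()).abs()

# Hypothetical training step: task loss plus a weighted fairness penalty.
layer = LoRALinear(nn.Linear(128, 1))
x, y = torch.randn(32, 128), torch.rand(32, 1)
groups = torch.randint(0, 2, (32,))
scores = torch.sigmoid(layer(x))
loss = nn.functional.binary_cross_entropy(scores, y) \
       + 0.1 * demographic_parity_gap(scores.squeeze(), groups)
loss.backward()  # gradients flow only into the low-rank factors A and B
```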
no code implementations • 20 Oct 2024 • Shahrad Mohammadzadeh, Juan David Guerra, Marco Bonizzato, Reihaneh Rabbany, Golnoosh Farnadi
As large language models (LLMs) are increasingly deployed across various industries, concerns about their reliability have grown, particularly due to hallucinations (outputs that are factually inaccurate or irrelevant to user input).
no code implementations • 26 Sep 2024 • Saber Malekmohammadi, Golnoosh Farnadi
As models grow in size, fully fine-tuning all of their parameters becomes increasingly impractical.
no code implementations • 15 Aug 2024 • Ahmed Imtiaz Humayun, Ibtihel Amara, Candice Schumann, Golnoosh Farnadi, Negar Rostamzadeh, Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training.
1 code implementation • 16 Jul 2024 • Niloofar Mireshghallah, Maria Antoniak, Yash More, Yejin Choi, Golnoosh Farnadi
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy and facilitate privacy research for large language models (LLMs).
1 code implementation • 2 Jul 2024 • Yash More, Prakhar Ganesh, Golnoosh Farnadi
Language models are prone to memorizing parts of their training data, which makes them vulnerable to extraction attacks.
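A minimal sketch of the kind of verbatim extraction probe this implies, assuming access to a candidate prefix and its true continuation (both invented here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A hypothetical prefix from the training data and its true continuation.
prefix = "My email address is"
true_continuation = " john.doe@example.com"

ids = tok(prefix, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, do_sample=False,
                     pad_token_id=tok.eos_token_id)
completion = tok.decode(out[0, ids.shape[1]:])

# The sequence counts as extracted if greedy decoding reproduces it verbatim.
print("memorized!" if completion.startswith(true_continuation) else "not extracted")
```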
no code implementations • 3 Jun 2024 • Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh
The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind.
no code implementations • 29 May 2024 • Saber Malekmohammadi, Afaf Taik, Golnoosh Farnadi
To address this gap, we propose an algorithm for differentially private clustered FL, which is robust to the DP noise in the system and correctly identifies the underlying clusters of clients.
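A simplified sketch of the setting, not the proposed algorithm: each client's update is clipped and noised under the Gaussian mechanism, and the server then tries to recover client clusters from the privatized updates (naive k-means stands in for the paper's robust clustering step):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def privatize(update, clip_norm=1.0, noise_mult=0.8):
    """Clip a client's model update and add Gaussian noise (Gaussian mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

# Two synthetic client populations with different underlying models.
clients = np.vstack([rng.normal(+1.0, 0.1, (10, 16)),
                     rng.normal(-1.0, 0.1, (10, 16))])
noisy_updates = np.array([privatize(u) for u in clients])

# The server clusters the privatized updates; robustness to the DP noise is
# the challenge the paper addresses, which this naive k-means step glosses over.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(noisy_updates)
print(labels)
```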
no code implementations • 28 May 2024 • Mina Arzaghi, Florian Carichon, Golnoosh Farnadi
Large Language Models (LLMs) are increasingly integrated into critical decision-making processes, such as loan approvals and visa applications, where inherent biases can lead to discriminatory outcomes.
no code implementations • 28 May 2024 • Prakhar Ganesh, Ihsan Ibrahim Daldaban, Ignacio Cofone, Golnoosh Farnadi
Model multiplicity, the phenomenon where multiple models achieve similar performance despite different underlying learned functions, introduces arbitrariness in model selection.
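The phenomenon is easy to reproduce; in the sketch below (synthetic data, arbitrary model choice), two classifiers differing only in random seed reach near-identical accuracy yet disagree on individual test points:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two models that differ only in random seed: near-identical accuracy,
# yet they disagree on individual test points (predictive multiplicity).
m1 = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
m2 = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)
print("accuracies:", m1.score(X_te, y_te), m2.score(X_te, y_te))
print("disagreement rate:", np.mean(m1.predict(X_te) != m2.predict(X_te)))
```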
no code implementations • 27 May 2024 • Armin Moradi, Nicola Neophytou, Golnoosh Farnadi
In this work, we identify these biases in recommendations for artists from underrepresented cultural groups in prototype-based matrix factorization methods.
no code implementations • 22 Apr 2024 • Jesse Thibodeau, Hadi Nekoei, Afaf Taïk, Janarthanan Rajendran, Golnoosh Farnadi
We find that, upon deploying a learned tax and redistribution policy, social welfare improves on that of the fairness-agnostic baseline and approaches that of the analytically optimal fairness-aware baseline in the multi-armed and contextual bandit settings, surpassing it by 13.19% in the full RL setting.
1 code implementation • 20 Mar 2024 • Khaoula Chehbouni, Megha Roshan, Emmanuel Ma, Futian Andrew Wei, Afaf Taik, Jackie CK Cheung, Golnoosh Farnadi
Despite growing efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and safe reinforcement learning from human feedback, multiple concerns remain regarding the safety of these models and the biases ingrained in them.
3 code implementations • 31 Oct 2023 • Meraj Hashemizadeh, Juan Ramirez, Rohan Sukumaran, Golnoosh Farnadi, Simon Lacoste-Julien, Jose Gallego-Posada
Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities.
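For context, a minimal magnitude-pruning sketch using PyTorch's built-in pruning utilities; note that the paper itself casts pruning as a constrained optimization problem rather than this one-shot heuristic:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 256)
# Zero out the 50% smallest-magnitude weights (unstructured L1 pruning).
prune.l1_unstructured(layer, name="weight", amount=0.5)
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")
```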
no code implementations • 30 Oct 2023 • Ahmad-Reza Ehyaei, Golnoosh Farnadi, Samira Samadi
Despite the essential need for comprehensive considerations in responsible AI, factors like robustness, fairness, and causality are often studied in isolation.
no code implementations • 5 Sep 2023 • Armin Moradi, Golnoosh Farnadi
The growing popularity of language models has sparked interest in conversational recommender systems (CRS) within both industry and research circles.
no code implementations • 28 Aug 2023 • Rebecca Salganik, Fernando Diaz, Golnoosh Farnadi
As online music platforms grow, music recommender systems play a vital role in helping users navigate and discover content within their vast musical databases.
no code implementations • 17 Aug 2023 • Ahmad-Reza Ehyaei, Kiarash Mohammadi, Amir-Hossein Karimi, Samira Samadi, Golnoosh Farnadi
In this paper, we propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces, particularly when dealing with discrete sensitive attributes.
no code implementations • 11 Jun 2023 • Maryam Molamohammadi, Afaf Taik, Nicolas Le Roux, Golnoosh Farnadi
The growing utilization of machine learning (ML) in decision-making processes raises questions about its benefits to society.
no code implementations • 6 Mar 2023 • Jia Ao Sun, Sikha Pentyala, Martine De Cock, Golnoosh Farnadi
Users worldwide access massive amounts of curated data in the form of rankings on a daily basis.
1 code implementation • 8 Sep 2022 • Rebecca Salganik, Fernando Diaz, Golnoosh Farnadi
We evaluate two popular GNN methods: Graph Convolutional Network (GCN), which trains on the entire graph, and GraphSAGE, which uses probabilistic random walks to create subgraphs for mini-batch training; we then assess the effects of this sub-sampling on individual fairness.
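To make the contrast concrete, here is a toy sketch of GraphSAGE-style neighbor sampling (uniform sampling stands in for the probabilistic random walks); a GCN would instead aggregate over every edge of the full graph:

```python
import random

def sample_subgraph(adj, seeds, fanout=3, depth=2, seed=0):
    """GraphSAGE-style sampling: starting from a mini-batch of seed nodes,
    keep at most `fanout` randomly chosen neighbors per node per hop."""
    random.seed(seed)
    frontier, visited = set(seeds), set(seeds)
    edges = []
    for _ in range(depth):
        next_frontier = set()
        for node in frontier:
            neighbors = adj.get(node, [])
            for nb in random.sample(neighbors, min(fanout, len(neighbors))):
                edges.append((node, nb))
                if nb not in visited:
                    next_frontier.add(nb)
                    visited.add(nb)
        frontier = next_frontier
    return visited, edges

# Toy user-item graph; a GCN would aggregate over *all* of these edges.
adj = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0, 6], 3: [0], 4: [0], 5: [1], 6: [2]}
nodes, edges = sample_subgraph(adj, seeds=[0])
print(nodes, edges)
```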
no code implementations • 1 Jun 2022 • Kiarash Mohammadi, Aishwarya Sivaraman, Golnoosh Farnadi
Empirical evaluation on real-world datasets indicates that FETA not only guarantees fairness on the fly at prediction time but also trains accurate models that exhibit a much higher degree of individual fairness.
no code implementations • 23 May 2022 • Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi
Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity.
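As a worked example of the group-fairness notion being audited, the demographic parity difference compares positive-outcome rates across groups (toy data below):

```python
import numpy as np

# Toy audit: positive-outcome rates per group under a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_0 = y_pred[group == 0].mean()  # selection rate for group 0 -> 0.6
rate_1 = y_pred[group == 1].mean()  # selection rate for group 1 -> 0.4
print(f"Demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```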
1 code implementation • 8 Feb 2022 • Sikha Pentyala, David Melanson, Martine De Cock, Golnoosh Farnadi
Machine learning (ML) has become prominent in applications that directly affect people's quality of life, including in healthcare, justice, and finance.
no code implementations • AAAI Workshop CLeaR 2022 • Kiarash Mohammadi, Aishwarya Sivaraman, Golnoosh Farnadi
There is an increasing interest in adopting high-capacity machine learning models such as deep neural networks to semi-automate human decisions.
1 code implementation • NeurIPS 2020 • Aishwarya Sivaraman, Golnoosh Farnadi, Todd Millstein, Guy Van Den Broeck
Additionally, we propose a technique to use monotonicity as an inductive bias for deep learning.
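One common way to encode monotonicity as a soft inductive bias, offered here as an assumption-laden sketch rather than the paper's technique, is a gradient penalty on negative partial derivatives:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(64, 3, requires_grad=True)
y = torch.randn(64, 1)

pred = net(x)
grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]

# Penalize negative partial derivatives w.r.t. feature 0, pushing the
# network towards predictions that are non-decreasing in that feature.
mono_penalty = torch.relu(-grads[:, 0]).mean()
loss = nn.functional.mse_loss(pred, y) + 1.0 * mono_penalty
loss.backward()
```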
no code implementations • 5 Jan 2020 • Golnoosh Farnadi, Lise Getoor, Marie-Francine Moens, Martine De Cock
In this paper, we propose a mechanism to infer a variety of user characteristics, such as age, gender, and personality traits, which can then be compiled into a user profile.
no code implementations • 23 Sep 2019 • Behrouz Babaki, Golnoosh Farnadi, Gilles Pesant
In this paper we show how identifying and exploiting these identical subproblems can simplify solving them and lead to a compact representation of the solution.
1 code implementation • 10 Jun 2019 • YooJung Choi, Golnoosh Farnadi, Behrouz Babaki, Guy Van Den Broeck
As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making.
no code implementations • 30 Aug 2018 • Sisi Wang, Wing-Sea Poon, Golnoosh Farnadi, Caleb Horst, Kebra Thompson, Michael Nickels, Rafael Dowsley, Anderson C. A. Nascimento, Martine De Cock
User profiling from user generated content (UGC) is a common practice that supports the business models of many social media companies.
no code implementations • 3 Jul 2018 • Varun Embar, Dhanya Sridhar, Golnoosh Farnadi, Lise Getoor
We introduce a greedy search-based algorithm and a novel optimization method that trade off scalability and approximations to the structure learning problem in varying ways.