Search Results for author: Golnoosh Farnadi

Found 39 papers, 9 papers with code

Say It Another Way: A Framework for User-Grounded Paraphrasing

no code implementations · 6 May 2025 · Cléa Chataigner, Rebecca Ma, Prakhar Ganesh, Afaf Taïk, Elliot Creager, Golnoosh Farnadi

Small changes in how a prompt is worded can lead to meaningful differences in the behavior of large language models (LLMs), raising concerns about the stability and reliability of their evaluations.
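
The kind of instability described above can be probed with a simple agreement check across paraphrases. Below is a minimal sketch, not the paper's user-grounded framework; `query_model` is a hypothetical stand-in for whatever LLM client is in use, and the paraphrases are hand-written.

```python
# Minimal paraphrase-stability probe. `query_model` is a hypothetical
# placeholder for a real LLM call; replace it with an actual client.
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    raise NotImplementedError

def stability(paraphrases: list[str]) -> float:
    """Fraction of paraphrases that yield the modal answer."""
    answers = [query_model(p).strip().lower() for p in paraphrases]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / len(answers)

prompts = [
    "Is the Eiffel Tower in Paris? Answer yes or no.",
    "Answer yes or no: does Paris contain the Eiffel Tower?",
    "The Eiffel Tower is located in Paris. True? Reply yes or no.",
]
# print(stability(prompts))  # 1.0 only if every wording agrees
```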

Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes

1 code implementation · 31 Mar 2025 · Florian Carichon, Romain Rampa, Golnoosh Farnadi

By introducing a set of Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to analyze textual divergences when recipes from one community are modified by another with a different cultural background.

Diversity · Recommendation Systems +1
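
A minimal sketch of the underlying idea: compute a Jensen-Shannon divergence between the word distributions of two recipe variants. The paper's actual metrics and preprocessing are likely more involved than this whitespace tokenization.

```python
# Jensen-Shannon divergence between two recipes' word distributions.
# Toy tokenization and example strings; illustrative only.
import math
from collections import Counter

def distribution(text: str) -> dict[str, float]:
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p: dict, q: dict) -> float:
    return sum(pw * math.log2(pw / q[w]) for w, pw in p.items() if pw > 0)

def js_divergence(p: dict, q: dict) -> float:
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in set(p) | set(q)}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)  # in [0, 1] with log base 2

original = "simmer tomatoes garlic basil olive oil"
adapted  = "simmer tomatoes ginger soy sauce sesame oil"
print(js_divergence(distribution(original), distribution(adapted)))
```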

The Curious Case of Arbitrariness in Machine Learning

no code implementations · 24 Jan 2025 · Prakhar Ganesh, Afaf Taik, Golnoosh Farnadi

Algorithmic modelling relies on limited information in data to extrapolate outcomes for unseen scenarios, often embedding an element of arbitrariness in its decisions.

Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy

no code implementations · 21 Jan 2025 · Khaoula Chehbouni, Martine De Cock, Gilles Caporossi, Afaf Taik, Reihaneh Rabbany, Golnoosh Farnadi

The increased screen time and isolation caused by the COVID-19 pandemic have led to a significant surge in cases of online grooming, which is the use of strategies by predators to lure children into sexual exploitation.

Federated Learning · Privacy Preserving

Embedding Cultural Diversity in Prototype-based Recommender Systems

no code implementations · 18 Dec 2024 · Armin Moradi, Nicola Neophytou, Florian Carichon, Golnoosh Farnadi

Using the country of origin as a proxy for cultural identity, we link this demographic attribute to popularity bias by refining the embedding space learning process.

Attribute · Diversity +2

Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML

no code implementations · 17 Nov 2024 · Prakhar Ganesh, Usman Gohar, Lu Cheng, Golnoosh Farnadi

In this work, we show significant variance in fairness achieved by several algorithms and the influence of the learning pipeline on fairness scores.

Benchmarking · Fairness +2
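
The variance reported above can be illustrated by rerunning one pipeline under different random splits and tracking a fairness score. The sketch below uses synthetic data and the demographic parity difference, not the paper's benchmark or algorithms.

```python
# Rerun the same training pipeline under different splits and report the
# spread of a fairness score. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
group = rng.integers(0, 2, size=2000)  # sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=2000) > 0).astype(int)

def dp_difference(model, X, group) -> float:
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|"""
    pred = model.predict(X)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

scores = []
for seed in range(10):
    Xtr, Xte, ytr, yte, gtr, gte = train_test_split(
        X, y, group, test_size=0.3, random_state=seed)
    model = LogisticRegression().fit(Xtr, ytr)
    scores.append(dp_difference(model, Xte, gte))

print(f"DP difference: mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")
```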

Beyond the Safety Bundle: Auditing the Helpful and Harmless Dataset

no code implementations · 12 Nov 2024 · Khaoula Chehbouni, Jonathan Colaço Carr, Yash More, Jackie CK Cheung, Golnoosh Farnadi

In an effort to mitigate the harms of large language models (LLMs), learning from human feedback (LHF) has been used to steer LLMs towards outputs that are intended to be both less harmful and more helpful.

Multilingual Hallucination Gaps in Large Language Models

no code implementations · 23 Oct 2024 · Cléa Chataigner, Afaf Taïk, Golnoosh Farnadi

In this study, we explore the phenomenon of hallucinations across multiple languages in freeform text generation, focusing on what we call multilingual hallucination gaps.

Hallucination · Text Generation

Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training

no code implementations · 20 Oct 2024 · Shahrad Mohammadzadeh, Juan David Guerra, Marco Bonizzato, Reihaneh Rabbany, Golnoosh Farnadi

As large language models (LLMs) are increasingly deployed across various industries, concerns regarding their reliability, particularly due to hallucinations (outputs that are factually inaccurate or irrelevant to user input), have grown.

Hallucination · Language Modeling +2

On the Implicit Relation Between Low-Rank Adaptation and Differential Privacy

no code implementations · 26 Sep 2024 · Saber Malekmohammadi, Golnoosh Farnadi

As models grow in size, fully fine-tuning all of their parameters becomes increasingly impractical.

LEMMA · Relation
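
For context, a minimal LoRA-style layer: the frozen weight is adapted only through a low-rank product, which is the structure whose privacy implications the paper analyzes. This is a generic sketch, not the paper's construction.

```python
# Minimal LoRA-style linear layer: the base weight W is frozen and only
# the rank-r factors A and B are trainable. Illustrative hyperparameters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_dim) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(out_dim, r))        # trainable
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # effective weight = W + scale * B @ A (rank-r update)
        return x @ (self.weight + self.scale * self.B @ self.A).T

layer = LoRALinear(16, 8)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```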

Understanding the Local Geometry of Generative Model Manifolds

no code implementations · 15 Aug 2024 · Ahmed Imtiaz Humayun, Ibtihel Amara, Candice Schumann, Golnoosh Farnadi, Negar Rostamzadeh, Mohammad Havaei

Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training.

Memorization · model

Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild

1 code implementation · 16 Jul 2024 · Niloofar Mireshghallah, Maria Antoniak, Yash More, Yejin Choi, Golnoosh Farnadi

Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy and facilitate privacy research for large language models (LLMs).

Chatbot

Towards More Realistic Extraction Attacks: An Adversarial Perspective

1 code implementation · 2 Jul 2024 · Yash More, Prakhar Ganesh, Golnoosh Farnadi

Language models are prone to memorizing parts of their training data, which makes them vulnerable to extraction attacks.

Memorization
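
A toy version of such an extraction probe: prompt the model with a prefix from a candidate training document and test whether the greedy continuation reproduces the true suffix. The model name and example strings below are placeholders, not the paper's adversarial setup.

```python
# Prefix-continuation memorization probe with a Hugging Face causal LM.
# "gpt2" and the strings are placeholders; any causal LM would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "The quick brown fox"
true_suffix = " jumps over the lazy dog"

ids = tok(prefix, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, do_sample=False)  # greedy decoding
continuation = tok.decode(out[0, ids.shape[1]:])

print("extracted!" if continuation.startswith(true_suffix) else "no match")
```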

Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities

no code implementations · 3 Jun 2024 · Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh

The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind.

Position

Differentially Private Clustered Federated Learning

no code implementations · 29 May 2024 · Saber Malekmohammadi, Afaf Taik, Golnoosh Farnadi

To address this gap, we propose an algorithm for differentially private clustered FL, which is robust to the DP noise in the system and correctly identifies the underlying clients' clusters.

Clustering · Fairness +1
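
A toy sketch of the two ingredients named above: client updates are clipped and perturbed with Gaussian noise (DP-SGD style), and the server clusters the noisy updates. Hyperparameters are illustrative; this is not the paper's algorithm.

```python
# Clip-and-noise client updates, then cluster them at the server.
# Synthetic client populations; illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
clip, sigma = 1.0, 0.5

def privatize(update: np.ndarray) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)  # clip to L2 norm <= clip
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

# two latent client populations with different local optima
updates = np.vstack([rng.normal(+1, 0.1, (5, 4)), rng.normal(-1, 0.1, (5, 4))])
noisy = np.array([privatize(u) for u in updates])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(noisy)
print(labels)  # ideally separates the first five clients from the last five
```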

Understanding Intrinsic Socioeconomic Biases in Large Language Models

no code implementations · 28 May 2024 · Mina Arzaghi, Florian Carichon, Golnoosh Farnadi

Large Language Models (LLMs) are increasingly integrated into critical decision-making processes, such as loan approvals and visa applications, where inherent biases can lead to discriminatory outcomes.

Decision Making · Fairness

The Cost of Arbitrariness for Individuals: Examining the Legal and Technical Challenges of Model Multiplicity

no code implementations · 28 May 2024 · Prakhar Ganesh, Ihsan Ibrahim Daldaban, Ignacio Cofone, Golnoosh Farnadi

Model multiplicity, the phenomenon where multiple models achieve similar performance despite different underlying learned functions, introduces arbitrariness in model selection.

Model Selection
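
Predictive multiplicity is easy to reproduce in miniature: models with nearly identical test accuracy can still disagree on individual points, which is the arbitrariness at issue. The sketch below uses synthetic data; the paper's legal analysis is out of scope here.

```python
# Train several near-equivalent models and measure individual-level
# disagreement on the test set. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 6))
y = (X[:, 0] + X[:, 1] + rng.normal(size=1500) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

preds = []
for seed in range(5):
    m = RandomForestClassifier(n_estimators=50, random_state=seed).fit(Xtr, ytr)
    print(f"seed {seed}: accuracy={m.score(Xte, yte):.3f}")
    preds.append(m.predict(Xte))

preds = np.array(preds)
ambiguous = (preds.min(axis=0) != preds.max(axis=0)).mean()
print(f"share of test points where the models disagree: {ambiguous:.3f}")
```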

Advancing Cultural Inclusivity: Optimizing Embedding Spaces for Balanced Music Recommendations

no code implementations · 27 May 2024 · Armin Moradi, Nicola Neophytou, Golnoosh Farnadi

In this work, we identify these biases in recommendations for artists from underrepresented cultural groups in prototype-based matrix factorization methods.

Fairness · Music Recommendation +1

Fairness Incentives in Response to Unfair Dynamic Pricing

no code implementations · 22 Apr 2024 · Jesse Thibodeau, Hadi Nekoei, Afaf Taïk, Janarthanan Rajendran, Golnoosh Farnadi

We find that, upon deploying a learned tax and redistribution policy, social welfare improves on that of the fairness-agnostic baseline and approaches that of the analytically optimal fairness-aware baseline in the multi-armed and contextual bandit settings, surpassing it by 13.19% in the full RL setting.

Fairness · Reinforcement Learning (RL)

From Representational Harms to Quality-of-Service Harms: A Case Study on Llama 2 Safety Safeguards

1 code implementation · 20 Mar 2024 · Khaoula Chehbouni, Megha Roshan, Emmanuel Ma, Futian Andrew Wei, Afaf Taik, Jackie CK Cheung, Golnoosh Farnadi

Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and leveraging safe reinforcement learning from human feedback, multiple concerns regarding the safety and ingrained biases in these models remain.

Safe Reinforcement Learning

Balancing Act: Constraining Disparate Impact in Sparse Models

3 code implementations · 31 Oct 2023 · Meraj Hashemizadeh, Juan Ramirez, Rohan Sukumaran, Golnoosh Farnadi, Simon Lacoste-Julien, Jose Gallego-Posada

Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities.
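
A sketch of the concern the paper addresses: magnitude pruning can shift accuracy unevenly across demographic groups. The snippet below only measures that gap on synthetic data; the paper constrains it during training.

```python
# Magnitude-prune a linear model and compare per-group accuracy before
# and after pruning. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
group = rng.integers(0, 2, size=2000)
y = (X[:, :3].sum(axis=1) + 0.8 * group * X[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def group_acc(w):
    pred = ((X @ w + model.intercept_) > 0).astype(int)
    return [np.mean(pred[group == g] == y[group == g]) for g in (0, 1)]

w = model.coef_.ravel().copy()
w_pruned = np.where(np.abs(w) >= np.quantile(np.abs(w), 0.8), w, 0.0)  # keep top 20%

print("dense :", group_acc(w))
print("pruned:", group_acc(w_pruned))  # the accuracy drop can be uneven
```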

Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness

no code implementations · 30 Oct 2023 · Ahmad-Reza Ehyaei, Golnoosh Farnadi, Samira Samadi

Despite the essential need for comprehensive considerations in responsible AI, factors like robustness, fairness, and causality are often studied in isolation.

Adversarial Robustness · counterfactual +2

Tidying Up the Conversational Recommender Systems' Biases

no code implementations · 5 Sep 2023 · Armin Moradi, Golnoosh Farnadi

The growing popularity of language models has sparked interest in conversational recommender systems (CRS) within both industry and research circles.

Natural Language Understanding · Recommendation Systems

Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery

no code implementations · 28 Aug 2023 · Rebecca Salganik, Fernando Diaz, Golnoosh Farnadi

As online music platforms grow, music recommender systems play a vital role in helping users navigate and discover content within their vast musical databases.

Fairness · Graph Neural Network +1

Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces

no code implementations · 17 Aug 2023 · Ahmad-Reza Ehyaei, Kiarash Mohammadi, Amir-Hossein Karimi, Samira Samadi, Golnoosh Farnadi

In this paper, we propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces, particularly when dealing with discrete sensitive attributes.

Adversarial Robustness · Fairness +2

Unraveling the Interconnected Axes of Heterogeneity in Machine Learning for Democratic and Inclusive Advancements

no code implementations · 11 Jun 2023 · Maryam Molamohammadi, Afaf Taik, Nicolas Le Roux, Golnoosh Farnadi

The growing utilization of machine learning (ML) in decision-making processes raises questions about its benefits to society.

Decision Making

Privacy-Preserving Fair Item Ranking

no code implementations · 6 Mar 2023 · Jia Ao Sun, Sikha Pentyala, Martine De Cock, Golnoosh Farnadi

Users worldwide access massive amounts of curated data in the form of rankings on a daily basis.

Fairness · Privacy Preserving

Analyzing the Effect of Sampling in GNNs on Individual Fairness

1 code implementation · 8 Sep 2022 · Rebecca Salganik, Fernando Diaz, Golnoosh Farnadi

We evaluate two popular GNN methods: Graph Convolutional Network (GCN), which trains on the entire graph, and GraphSAGE, which uses probabilistic random walks to create subgraphs for mini-batch training, and assess the effects of sub-sampling on individual fairness.

Fairness · Graph Neural Network +1
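
The individual-fairness notion used in this line of work, that similar inputs should receive similar outputs, can be checked with a Lipschitz-style consistency measure. The sketch below uses random features and a linear scorer as a stand-in for GNN outputs; the paper's exact metric and training loop are omitted.

```python
# Worst-case output gap over pairs of similar inputs: a simple
# individual-fairness check. Random stand-in data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # node features
scores = X @ rng.normal(size=8)      # stand-in for GNN predictions

def consistency(X, scores, eps=1.0):
    """Largest prediction gap among pairs of inputs within distance eps."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if np.linalg.norm(X[i] - X[j]) <= eps:
                worst = max(worst, abs(scores[i] - scores[j]))
    return worst

print(consistency(X, scores))  # smaller is more individually fair
```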

FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks

no code implementations · 1 Jun 2022 · Kiarash Mohammadi, Aishwarya Sivaraman, Golnoosh Farnadi

Empirical evaluation on real-world datasets indicates that FETA not only guarantees fairness on-the-fly at prediction time but also trains accurate models that exhibit a much higher degree of individual fairness.

Decision Making · Fairness +1
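
As a generic illustration of enforcing a fairness property "on-the-fly" at prediction time (not FETA's actual verification-based algorithm), one can force identical outputs for an input and its counterfactual with the sensitive feature flipped:

```python
# Prediction-time wrapper that returns the same score for an input and
# its sensitive-attribute counterfactual. Toy model; illustrative only.
import numpy as np

def base_model(x: np.ndarray) -> float:
    # toy scorer; x[2] plays the role of a binary sensitive feature
    return float(1 / (1 + np.exp(-(x[0] + 0.5 * x[2]))))

def fair_predict(x: np.ndarray, sensitive_idx: int = 2) -> float:
    x_cf = x.copy()
    x_cf[sensitive_idx] = 1 - x_cf[sensitive_idx]    # flip the binary attribute
    return 0.5 * (base_model(x) + base_model(x_cf))  # identical for the pair

x = np.array([0.3, -1.2, 1.0])
print(base_model(x), fair_predict(x))
```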

PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning

no code implementations · 23 May 2022 · Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi

Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity.

Attribute · Decision Making +3
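
Why group fairness in FL needs cross-client aggregation, shown in miniature: each client holds only part of each demographic group, so group-level rates must come from summed counts. PrivFairFL performs this aggregation with privacy-preserving techniques; the sketch below does it in the clear.

```python
# Each client reports (positives, total) per group; the server sums the
# counts and computes a global demographic parity difference.
import numpy as np

rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    g = rng.integers(0, 2, size=200)      # sensitive attribute per client
    yhat = rng.integers(0, 2, size=200)   # local model decisions
    clients.append((g, yhat))

pos = np.zeros(2)
tot = np.zeros(2)
for g, yhat in clients:
    for grp in (0, 1):
        pos[grp] += yhat[g == grp].sum()
        tot[grp] += (g == grp).sum()

rates = pos / tot
print(f"global demographic parity difference: {abs(rates[0] - rates[1]):.3f}")
```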

PrivFair: a Library for Privacy-Preserving Fairness Auditing

1 code implementation · 8 Feb 2022 · Sikha Pentyala, David Melanson, Martine De Cock, Golnoosh Farnadi

Machine learning (ML) has become prominent in applications that directly affect people's quality of life, including in healthcare, justice, and finance.

Fairness · Privacy Preserving

Post-processing Counterexample-guided Fairness Guarantees in Neural Networks

no code implementations · AAAI Workshop CLeaR 2022 · Kiarash Mohammadi, Aishwarya Sivaraman, Golnoosh Farnadi

There is an increasing interest in adopting high-capacity machine learning models such as deep neural networks to semi-automate human decisions.

Fairness

User Profiling Using Hinge-loss Markov Random Fields

no code implementations · 5 Jan 2020 · Golnoosh Farnadi, Lise Getoor, Marie-Francine Moens, Martine De Cock

In this paper, we propose a mechanism to infer a variety of user characteristics, such as age, gender, and personality traits, which can then be compiled into a user profile.

Relational Reasoning

Compiling Stochastic Constraint Programs to And-Or Decision Diagrams

no code implementations · 23 Sep 2019 · Behrouz Babaki, Golnoosh Farnadi, Gilles Pesant

In this paper we show how identifying and exploiting these identical subproblems can simplify solving them and lead to a compact representation of the solution.

Decision Making
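
The gain from merging identical subproblems can be seen on a toy recursion: caching collapses an exponential search tree into a compact set of shared nodes, analogous in spirit to the and-or decision diagrams above.

```python
# Counting lattice paths with memoization: identical subproblems are
# solved once and shared, instead of being re-expanded exponentially.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def count_paths(x: int, y: int) -> int:
    """Number of monotone lattice paths from (0, 0) to (x, y)."""
    global calls
    calls += 1
    if x == 0 or y == 0:
        return 1
    return count_paths(x - 1, y) + count_paths(x, y - 1)

print(count_paths(10, 10))  # 184756
print(calls)                # 121 shared subproblems instead of ~2^20 calls
```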

Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns

1 code implementation · 10 Jun 2019 · YooJung Choi, Golnoosh Farnadi, Behrouz Babaki, Guy Van Den Broeck

As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making.

Decision Making · Fairness
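
A toy version of a "discrimination pattern" check in a naive Bayes model: compare the decision probability for the same partial observation with and without the sensitive attribute revealed, and flag gaps above a threshold. The parameters below are made up for illustration and are not the paper's exact formulation.

```python
# Flag a discrimination pattern when revealing the sensitive attribute
# shifts the decision probability by more than delta. Toy parameters.
P_d = 0.5                          # P(decision = 1)
P_s_given_d = {1: 0.7, 0: 0.3}     # P(sensitive = 1 | d)
P_x_given_d = {1: 0.6, 0: 0.4}     # P(feature = 1 | d)

def posterior(x, s=None):
    """P(d=1 | x [, s]) via Bayes' rule under the naive Bayes assumption."""
    def lik(d):
        p = P_x_given_d[d] if x == 1 else 1 - P_x_given_d[d]
        if s is not None:
            p *= P_s_given_d[d] if s == 1 else 1 - P_s_given_d[d]
        return p
    num = lik(1) * P_d
    return num / (num + lik(0) * (1 - P_d))

delta = 0.05
gap = abs(posterior(x=1, s=1) - posterior(x=1, s=None))
print(f"gap={gap:.3f}", "-> discrimination pattern" if gap > delta else "")
```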

VirtualIdentity: Privacy-Preserving User Profiling

no code implementations · 30 Aug 2018 · Sisi Wang, Wing-Sea Poon, Golnoosh Farnadi, Caleb Horst, Kebra Thompson, Michael Nickels, Rafael Dowsley, Anderson C. A. Nascimento, Martine De Cock

User profiling from user generated content (UGC) is a common practice that supports the business models of many social media companies.

Privacy Preserving

Scalable Structure Learning for Probabilistic Soft Logic

no code implementations · 3 Jul 2018 · Varun Embar, Dhanya Sridhar, Golnoosh Farnadi, Lise Getoor

We introduce a greedy search-based algorithm and a novel optimization method that trade off scalability and approximations to the structure learning problem in varying ways.
