Search Results for author: Avijit Ghosh

Found 12 papers, 2 papers with code

Coordinated Disclosure for AI: Beyond Security Vulnerabilities

no code implementations · 10 Feb 2024 · Sven Cattell, Avijit Ghosh

Harm reporting in the field of Artificial Intelligence (AI) currently operates on an ad hoc basis, lacking a structured process for disclosing or addressing algorithmic flaws.

Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms

no code implementations · 15 Jul 2023 · Organizers Of QueerInAI, Nathan Dennler, Anaelia Ovalle, Ashwin Singh, Luca Soldaini, Arjun Subramonian, Huy Tu, William Agnew, Avijit Ghosh, Kyra Yee, Irene Font Peradejordi, Zeerak Talat, Mayra Russo, Jess de Jesus de Pinho Pinhal

However, these auditing processes have been criticized for failing to integrate the knowledge of marginalized communities and to consider the power dynamics between auditors and audited communities.

When Fair Classification Meets Noisy Protected Attributes

1 code implementation · 6 Jul 2023 · Avijit Ghosh, Pablo Kvitca, Christo Wilson

Our study provides insights into the practical implications of using fair classification algorithms in scenarios where protected attributes are noisy or partially available.

Attribute Classification +1

Can There be Art Without an Artist?

no code implementations · 16 Sep 2022 · Avijit Ghosh, Genoveva Fossas

Generative AI-based art has proliferated in the past year, with increasingly impressive use cases, from generating fake human faces to systems that can produce thousands of artistic images from text prompts; some of these images have even been "good" enough to win accolades from qualified judges.

Subverting Fair Image Search with Generative Adversarial Perturbations

no code implementations · 5 May 2022 · Avijit Ghosh, Matthew Jagielski, Christo Wilson

In this work we explore the intersection of fairness and robustness in the context of ranking: when a ranking model has been calibrated to achieve some definition of fairness, is it possible for an external adversary to make the ranking model behave unfairly without having access to the model or its training data?

Fairness Image Retrieval +1

FairCanary: Rapid Continuous Explainable Fairness

no code implementations · 13 Jun 2021 · Avijit Ghosh, Aalok Shanbhag, Christo Wilson

We incorporate QDD (Quantile Demographic Drift) into a continuous model monitoring system, called FairCanary, that reuses existing explanations computed for each individual prediction to quickly compute explanations for the QDD bias metrics.

Fairness Feature Importance
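The QDD metric described in the abstract above — comparing prediction-score distributions between demographic groups quantile by quantile — can be illustrated with a minimal sketch. The function name and interface here are hypothetical, not FairCanary's actual API:

```python
import numpy as np

def quantile_demographic_drift(scores_a, scores_b, n_quantiles=10):
    """Illustrative quantile-difference drift metric (hypothetical sketch,
    not the FairCanary implementation): compare the predicted-score
    distributions of two demographic groups at each interior quantile."""
    qs = np.linspace(0.0, 1.0, n_quantiles + 1)[1:-1]  # interior quantiles
    qa = np.quantile(np.asarray(scores_a), qs)
    qb = np.quantile(np.asarray(scores_b), qs)
    return qa - qb  # per-quantile disparity between the two groups

# Example: group B's scores are shifted upward relative to group A,
# so every per-quantile difference comes out negative.
rng = np.random.default_rng(0)
drift = quantile_demographic_drift(rng.normal(0.5, 0.1, 1000),
                                   rng.normal(0.6, 0.1, 1000))
```

A per-quantile view like this captures disparities anywhere in the score distribution, not just in the means, which is what makes a quantile-based drift signal useful for continuous monitoring.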

When Fair Ranking Meets Uncertain Inference

1 code implementation · 5 May 2021 · Avijit Ghosh, Ritam Dutt, Christo Wilson

Existing fair ranking systems, especially those designed to be demographically fair, assume that accurate demographic information about individuals is available to the ranking algorithm.

Fairness

Unified Shapley Framework to Explain Prediction Drift

no code implementations · 15 Feb 2021 · Aalok Shanbhag, Avijit Ghosh, Josh Rubin

Predictions are the currency of a machine learning model, and understanding the model's behavior over segments of a dataset, or over time, is an important problem in machine learning research and practice.

BIG-bench Machine Learning

Characterizing Intersectional Group Fairness with Worst-Case Comparisons

no code implementations · 5 Jan 2021 · Avijit Ghosh, Lea Genuit, Mary Reagan

Machine Learning and Artificial Intelligence algorithms have come under considerable scrutiny in recent times owing to their propensity to imitate and amplify existing prejudices in society.

Fairness
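The worst-case comparison idea named in the title above can be sketched generically: compute a fairness statistic for every intersectional subgroup and report the largest pairwise gap. All names and choices below (positive-prediction rate as the statistic, max-minus-min as the gap) are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def worst_case_gap(y_pred, attrs):
    """Illustrative worst-case intersectional comparison (hypothetical
    sketch): compute the positive-prediction rate for every intersectional
    subgroup (each unique combination of protected attributes) and return
    the largest gap between any two subgroups."""
    attrs = np.asarray(attrs)
    y_pred = np.asarray(y_pred, dtype=float)
    rates = []
    for combo in {tuple(row) for row in attrs}:  # each intersectional group
        mask = np.all(attrs == combo, axis=1)
        rates.append(y_pred[mask].mean())       # subgroup positive rate
    return max(rates) - min(rates)

# Example with two binary protected attributes (four intersectional groups):
attrs = [[0, 0], [0, 0], [0, 1], [0, 1], [1, 0], [1, 0], [1, 1], [1, 1]]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
gap = worst_case_gap(y_pred, attrs)  # groups range from rate 0.0 to 1.0
```

Focusing on the worst-off subgroup is what distinguishes an intersectional audit from one that checks each protected attribute in isolation: a model can look fair on race and on gender separately while a specific race-gender intersection fares far worse.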

Public Sphere 2.0: Targeted Commenting in Online News Media

no code implementations · 21 Feb 2019 · Ankan Mullick, Sayan Ghosh, Ritam Dutt, Avijit Ghosh, Abhijnan Chakraborty

Because readers lack the time to go over an entire article, most comments are relevant only to particular sections of the article.
