29 Jan 2024 • Michael Feffer, Anusha Sinha, Zachary C. Lipton, Hoda Heidari
In response to rising concerns surrounding the safety, security, and trustworthiness of Generative AI (GenAI) models, practitioners and regulators alike have pointed to AI red-teaming as a key component of their strategies for identifying and mitigating these risks.
10 Oct 2023 • Michael Feffer, Nikolas Martelaro, Hoda Heidari
Prior work has established the importance of integrating AI ethics topics into computer and data sciences curricula.
31 Jul 2023 • Martin Hirzel, Michael Feffer
Many papers have proposed algorithms for improving the fairness of machine-learning classifiers on tabular data.
27 May 2023 • Michael Feffer, Hoda Heidari, Zachary C. Lipton
With Artificial Intelligence systems increasingly applied in consequential domains, researchers have begun to ask how these systems ought to act in ethically charged situations where even humans lack consensus.
11 Oct 2022 • Michael Feffer, Martin Hirzel, Samuel C. Hoffman, Kiran Kate, Parikshit Ram, Avraham Shinnar
Bias mitigators can improve algorithmic fairness in machine learning models, but their effect on fairness is often not stable across data splits.
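The kind of instability the abstract refers to can be illustrated with a minimal, hypothetical sketch: train a classifier on several random train/test splits of a synthetic dataset and observe how a simple fairness metric (here, the demographic-parity gap) varies from split to split. The data, the logistic-regression stand-in, and the metric choice are assumptions for illustration, not the paper's mitigators or benchmarks.

```python
# Hypothetical sketch: fairness metrics can vary across random data splits.
# Synthetic data and logistic regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute (0 or 1)
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5  # features correlated with group
y = (x.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
X = np.column_stack([x, group])

def dp_gap(model, X_te):
    """Demographic-parity gap: |P(yhat=1 | g=1) - P(yhat=1 | g=0)|."""
    pred = model.predict(X_te)
    g = X_te[:, -1]
    return abs(pred[g == 1].mean() - pred[g == 0].mean())

gaps = []
for seed in range(10):                             # 10 random train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    model = LogisticRegression().fit(X_tr, y_tr)
    gaps.append(dp_gap(model, X_te))

print(f"gap mean: {np.mean(gaps):.3f}, std across splits: {np.std(gaps):.3f}")
```

A nonzero standard deviation across splits is the instability in question: a mitigator that looks fair on one split may not on another.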
1 Feb 2022 • Michael Feffer, Martin Hirzel, Samuel C. Hoffman, Kiran Kate, Parikshit Ram, Avraham Shinnar
A popular approach to train more stable models is ensemble learning.
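Ensemble learning in this stability sense can be sketched with bagging: many base models trained on bootstrap resamples vote on the final prediction, which tends to reduce variance relative to any single base model. The dataset, decision-tree base learner, and parameters below are illustrative assumptions, not the paper's setup.

```python
# Minimal bagging sketch: compare cross-validated accuracy spread of a
# single decision tree vs. a bagged ensemble of trees.
# Synthetic data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # nonlinear synthetic labels

single = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                           random_state=0)

acc_single = cross_val_score(single, X, y, cv=5)
acc_bagged = cross_val_score(bagged, X, y, cv=5)
print(f"single tree:  mean={acc_single.mean():.3f}, std={acc_single.std():.3f}")
print(f"bagged trees: mean={acc_bagged.mean():.3f}, std={acc_bagged.std():.3f}")
```

On most runs the bagged ensemble's fold-to-fold accuracy spread is no worse than the single tree's, which is the stability motivation the abstract alludes to.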