no code implementations • 20 Mar 2025 • Vishnu Asutosh Dasu, Md Rafi Ur Rashid, Vipul Gupta, Saeid Tizpaz-Niari, Gang Tan
This paper introduces Attention Pruning, a fairness-aware surrogate simulated annealing approach that prunes the attention heads in LLMs which disproportionately contribute to bias, while minimally impacting overall model utility.
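The sketch below illustrates the general idea of fairness-aware simulated annealing over a binary mask of attention heads; the head count, the objective weighting, and the evaluate_bias/evaluate_utility functions are hypothetical placeholders, not the paper's surrogate models or metrics.

```python
# Minimal sketch: simulated annealing over attention-head masks with a
# fairness/utility trade-off. The evaluation functions are stand-ins.
import math
import random

NUM_HEADS = 144  # assumed, e.g., 12 layers x 12 heads

def evaluate_bias(mask):
    # Placeholder: lower is better. In practice this would run a bias
    # benchmark on the model with the masked heads disabled.
    return sum(mask) / len(mask) + random.gauss(0, 0.01)

def evaluate_utility(mask):
    # Placeholder: higher is better, e.g., a held-out task metric.
    return sum(mask) / len(mask) + random.gauss(0, 0.01)

def objective(mask, lam=0.5):
    # Trade off bias reduction against utility preservation.
    return evaluate_bias(mask) - lam * evaluate_utility(mask)

def anneal(steps=1000, t0=1.0, cooling=0.995):
    mask = [1] * NUM_HEADS               # 1 = head kept, 0 = head pruned
    best, best_cost = mask[:], objective(mask)
    cost, temp = best_cost, t0
    for _ in range(steps):
        cand = mask[:]
        cand[random.randrange(NUM_HEADS)] ^= 1   # flip one head
        cand_cost = objective(cand)
        # Accept improvements, and worse moves with temperature-dependent probability.
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            mask, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = mask[:], cost
        temp *= cooling
    return best

if __name__ == "__main__":
    pruned = anneal()
    print("heads pruned:", pruned.count(0))
```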
no code implementations • 20 Jan 2025 • Verya Monjezi, Ashutosh Trivedi, Vladik Kreinovich, Saeid Tizpaz-Niari
Previous research on algorithmic fairness has focused on improving average-case fairness.
1 code implementation • 5 Jul 2024 • Vishnu Asutosh Dasu, Ashish Kumar, Saeid Tizpaz-Niari, Gang Tan
We show that our design of randomized algorithms is effective and efficient in improving fairness (up to 69%) with minimal or no model performance degradation.
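As a rough illustration of a randomized repair algorithm, the sketch below randomly samples neuron-dropout sets and keeps the one with the smallest fairness gap subject to an accuracy floor; fairness_gap, accuracy, and all thresholds are invented stand-ins rather than the paper's algorithm.

```python
# Minimal sketch: randomized search over neuron-dropout sets for fairness repair.
import random

NUM_NEURONS = 256          # assumed size of the layer being repaired

def fairness_gap(dropped):
    # Placeholder: difference in positive rates between groups when the
    # neurons in `dropped` are zeroed out at inference time.
    return max(0.0, 0.2 - 0.01 * len(dropped) + random.gauss(0, 0.005))

def accuracy(dropped):
    # Placeholder: model accuracy with the given neurons dropped.
    return 0.85 - 0.001 * len(dropped) + random.gauss(0, 0.002)

def random_repair(trials=500, max_drop=16, min_acc=0.83):
    best, best_gap = set(), fairness_gap(set())
    for _ in range(trials):
        cand = set(random.sample(range(NUM_NEURONS), random.randint(1, max_drop)))
        if accuracy(cand) < min_acc:
            continue                      # reject candidates that hurt utility
        gap = fairness_gap(cand)
        if gap < best_gap:
            best, best_gap = cand, gap
    return best, best_gap

if __name__ == "__main__":
    dropped, gap = random_repair()
    print(f"dropped {len(dropped)} neurons, fairness gap ~ {gap:.3f}")
```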
1 code implementation • 1 Jul 2024 • Normen Yu, Luciana Carreon, Gang Tan, Saeid Tizpaz-Niari
To aid data-driven software developers and end-users, we present FairLay-ML, a debugging tool to test and explain the fairness implications of data-driven solutions.
no code implementations • 29 Apr 2024 • Salvador Robles Herrera, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, Saeid Tizpaz-Niari
However, the precision depends on the ML training algorithm, dataset, and protected attributes.
no code implementations • 10 Apr 2024 • Saeid Tizpaz-Niari, Sriram Sankaranarayanan
On a set of larger machine learning training algorithms and deep neural network inference tasks, we show the feasibility and usefulness of extreme value theory (EVT) models in accurately predicting worst-case convergence times (WCCTs), their expected return periods, and their likelihood.
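To make the EVT workflow concrete, the sketch below fits a generalized extreme value distribution to synthetic block maxima of convergence times and derives a return level and a tail probability; the data, block structure, and budget are invented for the example and are not the paper's measurements.

```python
# Minimal sketch: fit a GEV model to observed worst-case convergence times
# and estimate a return level and exceedance probability (synthetic data).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Pretend each entry is the maximum convergence time observed in one block
# of training runs, e.g., the max over 50 runs.
block_maxima = rng.gumbel(loc=120.0, scale=15.0, size=200)  # seconds (synthetic)

# Fit a generalized extreme value (GEV) distribution to the block maxima.
shape, loc, scale = genextreme.fit(block_maxima)

# Return level: the convergence time expected to be exceeded once every T blocks.
T = 100
return_level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

# Likelihood of exceeding a given time budget within one block.
budget = 180.0
p_exceed = genextreme.sf(budget, shape, loc=loc, scale=scale)

print(f"estimated {T}-block return level: {return_level:.1f} s")
print(f"P(worst-case > {budget:.0f} s) ~ {p_exceed:.4f}")
```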
no code implementations • 20 Nov 2023 • Dananjay Srinivas, Rohan Das, Saeid Tizpaz-Niari, Ashutosh Trivedi, Maria Leonor Pacheco
Due to the ever-increasing complexity of income tax laws in the United States, the number of US taxpayers filing their taxes using tax preparation software (henceforth, tax software) continues to increase.
1 code implementation • 11 Jul 2023 • Normen Yu, Gang Tan, Saeid Tizpaz-Niari
This thesis explores open-source machine learning (ML) model explanation tools to understand whether these tools allow a layperson to visualize, understand, and suggest intuitive remedies for unfairness in ML-based decision-support systems.
no code implementations • 9 Apr 2023 • Verya Monjezi, Ashutosh Trivedi, Gang Tan, Saeid Tizpaz-Niari
Guided by quantitative fairness measures, we present a causal debugging framework to localize the inadequately trained layers and neurons responsible for fairness defects.
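A minimal sketch of the underlying intervention idea, not the paper's framework: ablate one hidden neuron at a time in a toy network and rank neurons by how much a group parity gap changes. The network, data, and fairness metric here are all assumptions for illustration.

```python
# Minimal sketch: localize fairness-relevant neurons by ablation interventions.
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 1000 samples, 8 features; feature 0 stands in for the protected attribute.
X = rng.normal(size=(1000, 8))
group = (X[:, 0] > 0).astype(int)

# Toy 2-layer MLP with random weights standing in for a trained model.
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

def predict(X, mask=None):
    h = np.maximum(X @ W1 + b1, 0.0)      # ReLU hidden layer
    if mask is not None:
        h = h * mask                       # intervention: zero selected neurons
    return (h @ W2 + b2).ravel() > 0.0     # binary decisions

def parity_gap(decisions):
    # Statistical parity difference between the two groups.
    return abs(decisions[group == 1].mean() - decisions[group == 0].mean())

baseline = parity_gap(predict(X))
scores = []
for j in range(16):
    mask = np.ones(16)
    mask[j] = 0.0                          # ablate neuron j
    scores.append(baseline - parity_gap(predict(X, mask)))

ranked = np.argsort(scores)[::-1]
print("baseline parity gap:", round(float(baseline), 3))
print("neurons whose ablation most reduces the gap:", ranked[:3])
```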
2 code implementations • 13 Feb 2022 • Saeid Tizpaz-Niari, Ashish Kumar, Gang Tan, Ashutosh Trivedi
This paper investigates the role of the parameter space of machine learning (ML) algorithms in aggravating or mitigating fairness bugs.
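To illustrate the kind of measurement involved, the sketch below sweeps two decision-tree hyperparameters on synthetic data and reports accuracy alongside a statistical parity gap; the grid, dataset, and fairness metric are illustrative choices, not the paper's search procedure.

```python
# Minimal sketch: how hyperparameter choices shift accuracy vs. a fairness gap.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
sensitive = rng.integers(0, 2, size=n)                 # protected attribute
X = np.column_stack([sensitive, rng.normal(size=(n, 5))])
y = ((X[:, 1] + 0.5 * sensitive + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

def parity_gap(pred, s):
    # Statistical parity difference between the two groups.
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

for max_depth in (2, 4, 8, None):
    for min_leaf in (1, 10, 50):
        clf = DecisionTreeClassifier(max_depth=max_depth,
                                     min_samples_leaf=min_leaf,
                                     random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(f"depth={max_depth}, min_leaf={min_leaf}: "
              f"acc={(pred == y_te).mean():.3f}, "
              f"parity_gap={parity_gap(pred, s_te):.3f}")
```

Sweeping the grid this way makes visible that two configurations with similar accuracy can differ noticeably in the parity gap, which is the kind of effect the paper studies.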
no code implementations • 3 Jun 2020 • Saeid Tizpaz-Niari, Pavol Cerný, Ashutosh Trivedi
On a set of micro-benchmarks, we show that our approach outperforms state-of-the-art fuzzers in finding inputs that characterize differential performance.
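A toy illustration of what "differential performance" means, assuming a contrived target function: same-length inputs are generated at random and timed, and the loop keeps the slowest and fastest ones. This is only a sketch of the measurement, not the paper's fuzzing algorithm.

```python
# Minimal sketch: search for same-length inputs with large running-time differences.
import random
import string
import time

def target(s):
    # Toy target whose cost depends on content, not just length.
    total = 0
    if s.startswith("a"):
        for i in range(len(s)):
            for j in range(len(s)):
                total += s[i] == s[j]
    return total

def measure(s, reps=3):
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        target(s)
        best = min(best, time.perf_counter() - t0)
    return best

def fuzz(length=300, trials=200):
    slow, fast = None, None
    t_slow, t_fast = 0.0, float("inf")
    for _ in range(trials):
        s = "".join(random.choices(string.ascii_lowercase, k=length))
        t = measure(s)
        if t > t_slow:
            slow, t_slow = s, t
        if t < t_fast:
            fast, t_fast = s, t
    return (slow, t_slow), (fast, t_fast)

if __name__ == "__main__":
    (_, ts), (_, tf) = fuzz()
    print(f"max/min time ratio over same-length inputs: {ts / tf:.1f}x")
```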
no code implementations • 23 Jul 2019 • Saeid Tizpaz-Niari, Pavol Cerny, Sriram Sankaranarayanan, Ashutosh Trivedi
As demonstrated in our experiments, both of these tasks are feasible in practice, making the approach a significant improvement over state-of-the-art side-channel detectors and quantifiers.
no code implementations • 21 Jun 2019 • Saeid Tizpaz-Niari, Pavol Cerny, Ashutosh Trivedi
In contrast to existing mitigation approaches, we show that, in the functional-observation threat model, SCHMIT is scalable and able to maximize confidentiality under a given performance overhead bound.
no code implementations • 30 Aug 2018 • Saeid Tizpaz-Niari, Pavol Cerny, Ashutosh Trivedi
On realistic programs, we show the scalability of FUCHSIA in analyzing functional side channels in Java programs with thousands of methods.
no code implementations • 11 Nov 2017 • Saeid Tizpaz-Niari, Pavol Cerny, Bor-Yuh Evan Chang, Ashutosh Trivedi
We propose a data-driven technique based on the discriminant regression tree (DRT) learning problem, where the goal is to discriminate among different classes of inputs.
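A minimal sketch of the discriminant-tree idea using scikit-learn on synthetic per-method call counts: running times are discretized into classes and a shallow tree explains which internal features separate them. The features, timing classes, and tree settings are assumptions, and the paper's DRT formulation differs in detail.

```python
# Minimal sketch: learn a tree over program-internal features that discriminates
# fast from slow executions (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 500
# Hypothetical internal features: call counts of three methods per execution.
calls = rng.integers(0, 100, size=(n, 3))
# Synthetic rule: executions with many calls to method_2 are slow.
running_time = 0.5 + 0.02 * calls[:, 2] + rng.normal(scale=0.1, size=n)

# Discretize running times into classes and learn a tree that discriminates
# the classes in terms of the internal features.
labels = (running_time > np.median(running_time)).astype(int)
tree = DecisionTreeClassifier(max_depth=2).fit(calls, labels)
print(export_text(tree, feature_names=["method_0_calls",
                                       "method_1_calls",
                                       "method_2_calls"]))
```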
no code implementations • 23 Feb 2017 • Saeid Tizpaz-Niari, Pavol Cerny, Bor-Yuh Evan Chang, Sriram Sankaranarayanan, Ashutosh Trivedi
What properties about the internals of a program explain the possible differences in its overall running time for different inputs?