1 code implementation • 26 Jan 2024 • Shibbir Ahmed, Hongyang Gao, Hridesh Rajan
In this work, we propose a novel technique that uses rules derived from neural network computations to infer data preconditions for a DNN model to determine the trustworthiness of its predictions.
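The abstract's idea of inferring data preconditions and checking them at prediction time can be sketched in miniature. This is an illustrative stand-in, not the paper's rule-derivation algorithm: it derives simple per-feature range preconditions from training data and flags inputs that violate them as potentially untrustworthy. All function names here are hypothetical.

```python
def infer_preconditions(training_data):
    """Return (min, max) bounds per feature -- a toy stand-in for the
    rule-derived data preconditions described in the paper."""
    n_features = len(training_data[0])
    bounds = []
    for j in range(n_features):
        column = [row[j] for row in training_data]
        bounds.append((min(column), max(column)))
    return bounds

def satisfies_preconditions(x, bounds):
    """True if every feature of x lies within the inferred bounds,
    i.e., the model's prediction on x is considered trustworthy."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, bounds))

train = [[0.1, 5.0], [0.3, 7.2], [0.2, 6.1]]
bounds = infer_preconditions(train)
print(satisfies_preconditions([0.2, 6.0], bounds))  # True: in-range input
print(satisfies_preconditions([0.9, 6.0], bounds))  # False: feature 0 out of range
```

In practice the preconditions would come from the network's own computations rather than raw feature ranges, but the check-before-trust pattern is the same.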
1 code implementation • 10 Sep 2023 • Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh Rajan
Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems.
no code implementations • 26 Jul 2023 • Samantha Syeda Khairunnesa, Shibbir Ahmed, Sayem Mohammad Imtiaz, Hridesh Rajan, Gary T. Leavens
The software engineering community could employ existing contract mining approaches to mine these contracts and thereby promote a better understanding of ML APIs.
2 code implementations • 15 Jun 2023 • Giang Nguyen, Sumon Biswas, Hridesh Rajan
To demonstrate its effectiveness, we evaluated our approach on four fairness problems and 16 different ML models; our results show a significant improvement over the baseline and existing bias mitigation techniques.
no code implementations • 9 Dec 2022 • Sayem Mohammad Imtiaz, Fraol Batole, Astha Singh, Rangeet Pan, Breno Dantas Cruz, Hridesh Rajan
Can we take a recurrent neural network (RNN) trained to translate between languages and augment it to support a new natural language without retraining the model from scratch?
1 code implementation • 8 Dec 2022 • Usman Gohar, Sumon Biswas, Hridesh Rajan
Furthermore, studies have shown that hyperparameters influence the fairness of ML models.
1 code implementation • 8 Dec 2022 • Sumon Biswas, Hridesh Rajan
In this paper, we propose Fairify, an SMT-based approach to verifying the individual fairness property of neural network (NN) models.
1 code implementation • 7 Dec 2021 • Mohammad Wardat, Breno Dantas Cruz, Wei Le, Hridesh Rajan
It can also provide actionable insights for fixes, whereas DeepLocalize can only report faults that lead to numerical errors during training.
2 code implementations • 6 Dec 2021 • Giang Nguyen, Md Johir Islam, Rangeet Pan, Hridesh Rajan
Recent work on AutoML, more precisely neural architecture search (NAS), embodied by tools like Auto-Keras, aims to solve this problem by viewing it as a search problem: the starting point is a default CNN model, and mutations of that model explore the space of CNN models to find one that works best for the problem at hand.
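The mutate-and-search loop described above can be sketched in a few lines. This is a toy hill climber over a hypothetical model configuration, not Auto-Keras's actual algorithm; the scoring function is a stand-in for validation accuracy.

```python
import random

def score(config):
    # Hypothetical objective: pretend deeper models with ~64 filters score best.
    return config["layers"] - abs(config["filters"] - 64) / 64

def mutate(config, rng):
    """Randomly perturb one aspect of the model configuration."""
    new = dict(config)
    if rng.random() < 0.5:
        new["layers"] = max(1, new["layers"] + rng.choice([-1, 1]))
    else:
        new["filters"] = max(8, new["filters"] + rng.choice([-16, 16]))
    return new

def search(start, steps=200, seed=0):
    """Greedy NAS-style search: keep a mutation only if it improves the score."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = mutate(best, rng)
        if score(candidate) > score(best):
            best = candidate
    return best

best = search({"layers": 2, "filters": 32})
print(best["filters"])  # the filter count climbs toward the optimum of 64
```

Real NAS tools use far richer mutation operators (adding blocks, changing layer types) and train each candidate to estimate its score, but the accept-if-better search skeleton is the same.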
no code implementations • 11 Oct 2021 • Rangeet Pan, Hridesh Rajan
Also, building a model by reusing or replacing modules can be done with a 2.3% and 0.5% average loss of accuracy, respectively.
no code implementations • ICLR 2022 • Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao
Implicit deep learning has received increasing attention recently because it generalizes the recursive prediction rules of many commonly used neural network architectures.
1 code implementation • 2 Jun 2021 • Sumon Biswas, Hridesh Rajan
What are the fairness impacts of the preprocessing stages in a machine learning pipeline?
2 code implementations • 21 May 2020 • Sumon Biswas, Hridesh Rajan
We then applied 7 mitigation techniques to these models and analyzed their fairness, the mitigation results, and the impacts on performance.
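A fairness analysis of this kind computes group-fairness metrics before and after mitigation. The sketch below shows one common metric, disparate impact, on hypothetical predictions; the paper's benchmark covers several metrics and techniques not reproduced here.

```python
def disparate_impact(predictions, protected):
    """Ratio of favorable-outcome rates:
    P(pred = 1 | unprivileged) / P(pred = 1 | privileged).
    Values near 1.0 indicate group parity."""
    priv = [p for p, a in zip(predictions, protected) if a == 1]
    unpriv = [p for p, a in zip(predictions, protected) if a == 0]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical predictions: privileged group (1) favored 3/4, unprivileged 1/4.
preds     = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact(preds, protected))  # 0.25 / 0.75 = 0.333...
```

A common rule of thumb (the "80% rule") treats a disparate impact below 0.8 as evidence of bias, which a mitigation technique would then try to raise toward 1.0.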
no code implementations • 27 Jun 2019 • Md Johirul Islam, Hoan Anh Nguyen, Rangeet Pan, Hridesh Rajan
Last, and somewhat surprisingly, a tug of war between providing higher levels of abstraction and the need to understand the behavior of the trained model is prevalent.
no code implementations • 3 Jun 2019 • Md Johirul Islam, Giang Nguyen, Rangeet Pan, Hridesh Rajan
The key findings of our study include: data bugs and logic bugs are the most severe bug types in deep learning software, appearing more than 48% of the time; the major root causes of these bugs are Incorrect Model Parameter or Structure (IPS) and Structural Inefficiency (SI), showing up more than 43% of the time.
no code implementations • 30 May 2019 • Rangeet Pan, Md Johirul Islam, Shibbir Ahmed, Hridesh Rajan
Based on the distances among the original classes, we create a mapping between original classes and adversarial classes that helps significantly reduce the randomness of a model in an adversarial setting.
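A distance-based mapping between classes can be sketched with class centroids. This is an assumed simplification, not the paper's construction: each point (e.g., an adversarial example) is mapped to the original class whose centroid is nearest.

```python
import math

def centroid(points):
    """Mean point of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_class(point, centroids):
    """Map a point to the class with the closest centroid (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda c: dist(point, centroids[c]))

# Hypothetical two-class example with well-separated clusters.
original = {"cat": [[0, 0], [1, 1]], "dog": [[10, 10], [11, 11]]}
centroids = {c: centroid(pts) for c, pts in original.items()}
print(nearest_class([0.4, 0.6], centroids))   # cat
print(nearest_class([9.5, 10.2], centroids))  # dog
```

Pinning adversarial inputs to a fixed nearest original class in this way makes the model's behavior under attack deterministic rather than random.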