1 code implementation • 14 Feb 2023 • Soumyajit Gupta, Sooyong Lee, Maria De-Arteaga, Matthew Lease
We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships that are relevant to each demographic group while also leveraging shared properties across groups.
no code implementations • 15 Apr 2022 • Venelin Kovatchev, Soumyajit Gupta, Anubrata Das, Matthew Lease
In this work, we first introduce a differentiable measure that enables direct optimization of group fairness (specifically, balancing accuracy across groups) in model training.
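One way such a differentiable group-fairness measure can be built is to replace the 0/1 correctness indicator with a smooth surrogate (e.g. a sigmoid of the model score), so a per-group accuracy gap becomes optimizable by gradient descent. The sketch below is illustrative only and is not the paper's actual measure; `soft_accuracy` and `group_gap_penalty` are hypothetical names:

```python
import math

def soft_accuracy(scores, labels):
    # Differentiable surrogate for accuracy: sigmoid(score) stands in
    # for the 0/1 correctness indicator, so gradients can flow.
    return sum(
        1 / (1 + math.exp(-s)) if y == 1 else 1 - 1 / (1 + math.exp(-s))
        for s, y in zip(scores, labels)
    ) / len(labels)

def group_gap_penalty(scores, labels, groups):
    # Penalize the spread of soft accuracy across demographic groups:
    # zero when all groups score equally, positive otherwise.
    by_group = {}
    for s, y, g in zip(scores, labels, groups):
        by_group.setdefault(g, ([], []))
        by_group[g][0].append(s)
        by_group[g][1].append(y)
    accs = [soft_accuracy(s, y) for s, y in by_group.values()]
    return max(accs) - min(accs)
```

Adding such a penalty to the task loss lets a single training objective trade off predictive accuracy against between-group balance.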
no code implementations • 28 Oct 2021 • Soumyajit Gupta, Gurpreet Singh, Raghu Bollapragada, Matthew Lease
Multi-objective optimization (MOO) problems require balancing competing objectives, often under constraints.
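The standard solution concept for such competing objectives is the Pareto front: the set of points no other candidate improves in every objective at once. A brute-force filter (illustrative only, minimization convention; `pareto_front` is a hypothetical helper) makes the idea concrete:

```python
def pareto_front(points):
    # Keep the non-dominated points: p is dominated if some other
    # point q is <= p in every objective (and q differs from p).
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, among the objective vectors (1, 3), (2, 2), (3, 1), and (3, 3), the first three are mutually non-dominated trade-offs while (3, 3) is dominated by (2, 2).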
no code implementations • 29 Sep 2021 • Soumyajit Gupta, Gurpreet Singh, Clint N. Dawson
For Big Data applications, computing a rank-$r$ Singular Value Decomposition (SVD) is restrictive due to the main memory requirements.
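The memory pressure is easy to quantify: a dense $m \times n$ matrix must reside in memory to factor, whereas the rank-$r$ output itself is only $U$ ($m \times r$), $r$ singular values, and $V$ ($n \times r$). A small illustrative calculation (double precision assumed; `svd_memory_bytes` is a hypothetical helper):

```python
def svd_memory_bytes(m, n, r, itemsize=8):
    # Bytes for the full dense matrix vs. its rank-r SVD factors:
    # U is m x r, the singular values take r entries, V is n x r.
    full = m * n * itemsize
    factors = (m * r + r + n * r) * itemsize
    return full, factors
```

For a 10,000 x 1,000 matrix at rank 50, the factors need roughly 4.4 MB against 80 MB for the full matrix, which is why out-of-core and streaming SVD methods matter at scale.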
no code implementations • 29 Sep 2021 • Soumyajit Gupta, Gurpreet Singh, Matthew Lease
The Stage-1 neural network efficiently extracts the weak Pareto front, using Fritz-John Conditions (FJC) as the discriminator, with no assumptions of convexity on the objectives or constraints.
no code implementations • 28 Apr 2021 • Gurpreet Singh, Soumyajit Gupta
However, a number of applications such as community detection, clustering, or bottleneck identification in large-scale graph datasets rely upon identifying the lowest singular values and the corresponding singular vectors.
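A classical route to the lowest singular value is shift-invert: the smallest eigenvalue of a symmetric positive-definite matrix is the reciprocal of the largest eigenvalue of its inverse, which power iteration can find. The 2x2 sketch below forms the inverse explicitly, which is exactly what large-scale methods must avoid; it is illustrative only, and both function names are hypothetical:

```python
def power_iteration(mat, iters=100):
    # Dominant eigenvalue of a symmetric 2x2 matrix via power iteration.
    # Start vector chosen to avoid orthogonality with the eigenvector.
    v = [1.0, 0.5]
    norm = 1.0
    for _ in range(iters):
        w = [mat[0][0] * v[0] + mat[0][1] * v[1],
             mat[1][0] * v[0] + mat[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return norm

def smallest_singular_value(mat):
    # For a symmetric positive-definite matrix, singular values equal
    # eigenvalues, so the smallest one is 1 / (largest eig of inverse).
    det = mat[0][0] * mat[1][1] - mat[0][1] * mat[1][0]
    inv = [[mat[1][1] / det, -mat[0][1] / det],
           [-mat[1][0] / det, mat[0][0] / det]]
    return 1.0 / power_iteration(inv)
```

At graph scale the inverse is never formed; solvers apply shift-invert implicitly through sparse linear solves instead.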
no code implementations • 10 Feb 2021 • Gurpreet Singh, Soumyajit Gupta, Clint Dawson
We show for the first time that a two-layer autoencoder (SCA), with $2FK$ parameters ($F$ features, $K$ endmembers), achieves error metrics that are orders of magnitude better ($10^{-5}$) than previously reported values ($10^{-2}$).
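The $2FK$ figure follows from the architecture: an encoder mapping $F$ features to $K$ endmember abundances and a decoder mapping back, each an $F \times K$ weight matrix (biases not counted in the claim). A trivial sanity check, with `sca_param_count` a hypothetical name:

```python
def sca_param_count(F, K):
    # Two-layer autoencoder: encoder weights (F x K) plus decoder
    # weights (K x F), giving 2 * F * K parameters in total.
    return 2 * F * K
```

For a hyperspectral cube with, say, 200 bands and 5 endmembers, that is only 2,000 parameters, which underlines how compact the model is.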
no code implementations • 27 Jan 2021 • Gurpreet Singh, Soumyajit Gupta, Matthew Lease, Clint Dawson
The first stage (neural network) efficiently extracts a weak Pareto front, using Fritz-John conditions as the discriminator, with no assumptions of convexity on the objectives or constraints.
no code implementations • 27 Oct 2020 • Gurpreet Singh, Soumyajit Gupta, Matthew Lease, Clint Dawson
Although these methods are claimed to be applicable to scientific computations due to associated tail-energy error bounds, the approximation errors in the singular vectors and values are high when the aforementioned assumption does not hold.
no code implementations • 13 Sep 2020 • Gurpreet Singh, Soumyajit Gupta, Matthew Lease
However, such an approach is often restricted to a strict class of functions, deviation from which results in a sub-optimal solution to the original problem.
no code implementations • 22 Aug 2020 • Gurpreet Singh, Soumyajit Gupta, Clint N. Dawson
We demonstrate through carefully chosen numerical experiments that the basis collapse issue leads to the design of massively redundant networks.
no code implementations • 5 Mar 2020 • Gurpreet Singh, Soumyajit Gupta, Matt Lease, Clint N. Dawson
Partial Differential Equations are infinite-dimensional encoded representations of physical processes.
no code implementations • 2 Dec 2016 • Jilin Wu, Soumyajit Gupta, Chandrajit Bajaj
Feature selection is the process of choosing a subset of relevant features so that the quality of prediction models can be improved.
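The simplest family of such methods is filter-style selection, which scores each feature independently of any model; a variance threshold, for instance, drops near-constant features that cannot inform a predictor. This baseline is illustrative only and is not the paper's method; `variance_filter` is a hypothetical name:

```python
def variance_filter(rows, threshold=0.0):
    # Keep the indices of features whose variance across samples
    # exceeds the threshold; constant columns carry no signal.
    n = len(rows)
    keep = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > threshold:
            keep.append(j)
    return keep
```

On samples [[1, 5], [1, 7], [1, 9]] the first feature is constant and is discarded, leaving only index 1.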