no code implementations • 23 May 2024 • Yizhuo Chen, Chun-Fu Chen, Hsiang Hsu, Shaohan Hu, Marco Pistoia, Tarek Abdelzaher

The growing richness of large-scale datasets has been crucial in driving the rapid advancement and wide adoption of machine learning technologies.

no code implementations • 6 Feb 2024 • Wei-Cheng Huang, Chun-Fu Chen, Hsiang Hsu

We illustrate that a simplified prompt-based method can achieve results comparable to previous state-of-the-art (SOTA) methods equipped with a prompt pool, while using far fewer learnable parameters and incurring lower inference cost.

no code implementations • 1 Feb 2024 • Hsiang Hsu, Guihong Li, Shaohan Hu, Chun-Fu Chen

Predictive multiplicity refers to the phenomenon in which classification tasks may admit multiple competing models that achieve almost-equally-optimal performance, yet generate conflicting outputs for individual samples.

2 code implementations • 1 Feb 2024 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.

no code implementations • 22 Dec 2023 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

The rapid growth of machine learning has spurred legislative initiatives such as "the Right to be Forgotten," allowing users to request data removal.

1 code implementation • 15 Jun 2023 • Carol Xuan Long, Hsiang Hsu, Wael Alghamdi, Flavio P. Calmon

Machine learning tasks may admit multiple competing models that achieve similar performance yet produce conflicting outputs for individual samples -- a phenomenon known as predictive multiplicity.
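The disagreement described above is often summarized as "ambiguity": the fraction of samples on which near-equally-accurate models conflict. A minimal illustrative sketch (not the paper's method) trains several logistic-regression models on bootstrap resamples of noisy data and measures how often their predictions diverge:

```python
import numpy as np

# Illustrative sketch of predictive multiplicity: fit several similarly
# accurate logistic-regression models (one per bootstrap resample / seed)
# and compute the fraction of inputs on which any two of them disagree.

rng_data = np.random.default_rng(0)
X = rng_data.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] + rng_data.normal(scale=1.0, size=200) > 0).astype(int)

def train_logreg(X, y, seed, steps=500, lr=0.1):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
    Xb, yb = X[idx], y[idx]
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # sigmoid
        w -= lr * Xb.T @ (p - yb) / len(yb)      # gradient step on log-loss
    return w

# Stack predictions of 5 competing models; a sample is "ambiguous" if the
# models do not all agree on its label.
preds = np.stack([(X @ train_logreg(X, y, s) > 0).astype(int) for s in range(5)])
ambiguity = np.mean(preds.min(axis=0) != preds.max(axis=0))
print(f"ambiguity: {ambiguity:.3f}")
```

All names here (`train_logreg`, the bootstrap construction of the model set) are assumptions for illustration; the papers above define and estimate multiplicity metrics more carefully.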

1 code implementation • 28 Feb 2023 • Bogdan Kulynych, Hsiang Hsu, Carmela Troncoso, Flavio P. Calmon

We demonstrate that such randomization incurs predictive multiplicity: for a given input example, the output predicted by equally-private models depends on the randomness used in training.

1 code implementation • 17 Sep 2022 • Marguerite B. Basta, Sarfaraz Hussein, Hsiang Hsu, Flavio P. Calmon

Then, the identified tumors are passed to a second CNN for recurrence risk prediction.

1 code implementation • 15 Jun 2022 • Wael Alghamdi, Hsiang Hsu, Haewon Jeong, Hao Wang, P. Winston Michalak, Shahab Asoodeh, Flavio P. Calmon

We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks.

1 code implementation • 2 Jun 2022 • Hsiang Hsu, Flavio du Pin Calmon

Predictive multiplicity occurs when classification models with statistically indistinguishable performances assign conflicting predictions to individual samples.

1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen

Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.

1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon

Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability.
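In its simplest assumed form, such a regularizer subtracts a scaled output-entropy term from the usual cross-entropy objective, so that minimizing the total loss pushes the classifier toward higher-entropy (less overconfident) outputs. A minimal numpy sketch of this idea (not the official CPR code):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cpr_loss(logits, labels, beta=0.1):
    # Cross-entropy minus beta * mean output entropy; subtracting the
    # entropy means minimizing this loss *maximizes* output entropy.
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - beta * entropy

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
labels = np.array([0, 2])
print(cpr_loss(logits, labels))
```

The coefficient `beta` and the exact placement of the term are assumptions here; the paper specifies how the regularizer is applied in the continual-learning setting.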

no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon

To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).
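A toy illustration of this comparison (an assumed setup, not the paper's experiments): on synthetic data where two groups have different class geometries, per-group "split" nearest-centroid classifiers can outperform a single group-blind classifier trained on the pooled data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_group(mean0, mean1, n=200):
    # Two Gaussian classes per group, centered at mean0 / mean1.
    X = np.concatenate([rng.normal(mean0, 1.0, size=(n, 2)),
                        rng.normal(mean1, 1.0, size=(n, 2))])
    y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
    return X, y

Xa, ya = make_group([-1, 0], [1, 0])   # group A: classes separated along x
Xb, yb = make_group([0, -1], [0, 1])   # group B: classes separated along y

def centroid_acc(Xtr, ytr, Xte, yte):
    # Nearest-centroid rule: predict the class whose training mean is closer.
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Split: one classifier per group; group-blind: one classifier on pooled data.
split_acc = (centroid_acc(Xa, ya, Xa, ya) + centroid_acc(Xb, yb, Xb, yb)) / 2
X, y = np.concatenate([Xa, Xb]), np.concatenate([ya, yb])
blind_acc = centroid_acc(X, y, X, y)
print(f"split: {split_acc:.3f}  group-blind: {blind_acc:.3f}")
```

The nearest-centroid rule and the synthetic geometry are chosen only to make the split-vs-blind gap visible in a few lines; the paper's analysis characterizes when such gaps arise in general.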

no code implementations • 17 Oct 2019 • Hsiang Hsu, Shahab Asoodeh, Flavio du Pin Calmon

The core of this mechanism relies on a data-driven estimate of the trimmed information density for which we propose a novel estimator, named the trimmed information density estimator (TIDE).

2 code implementations • 21 Feb 2019 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies.
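The textbook computation behind CA is an SVD of the standardized residuals of a contingency table; a generic numpy sketch (standard CA, not the estimator proposed in the paper):

```python
import numpy as np

# Classical correspondence analysis of a small contingency table.
N = np.array([[30., 10.,  5.],
              [10., 40., 10.],
              [ 5., 10., 30.]])
P = N / N.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses

# Standardized residuals: (P - r c^T) / sqrt(r c^T).
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S)           # singular values give the inertias

# Principal row coordinates for a 2-D visualization: D_r^{-1/2} U Sigma.
row_coords = (U[:, :2] * sv[:2]) / np.sqrt(r)[:, None]
print(np.round(row_coords, 3))
```

The smallest singular value of `S` is always (numerically) zero, reflecting the trivial dimension of the table; the remaining squared singular values sum to the table's total inertia.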

no code implementations • 29 Nov 2018 • Hsiang Hsu, Flavio P. Calmon, José Cândido Silveira Santos Filho, Andre P. Calmon, Salman Salamatian

We analyze expenditure patterns of discretionary funds by Brazilian congress members.

no code implementations • 21 Jun 2018 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

In this paper, we provide a novel interpretation of CA in terms of an information-theoretic quantity called the principal inertia components.

no code implementations • 16 Feb 2018 • Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, Flavio P. Calmon

Given a pair of random variables $(X, Y)\sim P_{XY}$ and two convex functions $f_1$ and $f_2$, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs $\left(I_{f_1}(W; X), I_{f_2}(W; Y)\right)$, where $I_f$ denotes $f$-information and $W$ varies over the set of all discrete random variables satisfying the Markov condition $W \to X \to Y$.
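For reference, the $f$-information appearing above has the standard definition (for discrete $W$ and $X$):

```latex
I_f(W; X) \;=\; \sum_{w,x} P_W(w)\, P_X(x)\,
  f\!\left(\frac{P_{WX}(w,x)}{P_W(w)\, P_X(x)}\right),
```

and the choice $f(t) = t \log t$ recovers Shannon mutual information $I(W;X)$, so the classical information bottleneck arises as a special case of these functionals.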
