Search Results for author: Hsiang Hsu

Found 11 papers, 4 papers with code

Rashomon Capacity: A Metric for Predictive Multiplicity in Probabilistic Classification

1 code implementation • 2 Jun 2022 • Hsiang Hsu, Flavio du Pin Calmon

Predictive multiplicity occurs when classification models with nearly indistinguishable average performances assign conflicting predictions to individual samples.

Decision Making
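Rashomon capacity itself is a channel-capacity-style quantity defined on the competing models' score vectors; a simpler companion measure of predictive multiplicity often reported alongside it is *ambiguity*, the fraction of samples that receive conflicting labels from near-indistinguishable models. A minimal numpy sketch (function name and data are illustrative, not the paper's notation):

```python
import numpy as np

def ambiguity(predictions):
    """Fraction of samples assigned conflicting labels by at least two
    of the competing (near-indistinguishable) models. `predictions` has
    shape (n_models, n_samples). This is a simpler companion measure to
    Rashomon capacity, which is defined on the models' score vectors."""
    preds = np.asarray(predictions)
    # A sample is ambiguous iff the models' labels for it are not all equal.
    return float(np.mean(preds.min(axis=0) != preds.max(axis=0)))

# Hypothetical hard labels from three models with similar average accuracy.
print(ambiguity([[1, 0, 1, 1],
                 [1, 1, 1, 0],
                 [1, 0, 1, 0]]))  # 0.5: two of the four samples conflict
```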

Robust Hybrid Learning With Expert Augmentation

no code implementations • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen

Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.

Data Augmentation
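The hybrid-modelling idea in the abstract can be sketched in a few lines: keep the expert model intact and fit a data-driven correction on its residuals, so the expert's structure is preserved. This is a toy least-squares illustration, not the paper's method (which additionally uses expert augmentation, i.e. synthetic data generated from the expert model); all names and the "physical law" are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert model: a simplified "physical law" y = 2x.
def expert(x):
    return 2.0 * x

# Observations deviate from the expert by an unmodelled constant offset.
x = rng.uniform(-1.0, 1.0, 200)
y = expert(x) + 0.5 + 0.01 * rng.standard_normal(200)

# Hybrid model: an ML component (here, affine least squares) is fit on
# the expert's residuals rather than on the raw targets.
A = np.stack([x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, y - expert(x), rcond=None)

def hybrid(x_new):
    return expert(x_new) + coef[0] * x_new + coef[1]
```

On this data the learned correction recovers slope ≈ 0 and intercept ≈ 0.5, i.e. exactly the part of the system the expert model misspecifies.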

CPR: Classifier-Projection Regularization for Continual Learning

1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon

Inspired both by recent results on neural networks with wide local minima and by information theory, CPR adds a regularization term that maximizes the entropy of a classifier's output probability distribution.

Continual Learning
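The regularizer described above can be sketched as a loss function: cross-entropy minus a weighted mean output entropy, so that minimizing the loss maximizes the entropy of the classifier's predictions. This is a generic numpy sketch; `beta` is a hypothetical weight name, not the paper's notation:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cpr_loss(logits, labels, beta=0.1):
    """Cross-entropy minus beta * mean output entropy: subtracting the
    entropy term means minimizing this loss *maximizes* the entropy of
    the output distribution, as in the regularizer described above."""
    p = softmax(np.asarray(logits, dtype=float))
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - beta * ent
```

With `beta=0` this reduces to plain cross-entropy; for `beta > 0` the loss rewards less peaked (wider-minimum) output distributions.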

To Split or Not to Split: The Impact of Disparate Treatment in Classification

no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon

To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).

General Classification
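The split-vs-group-blind comparison in the abstract can be illustrated with one-dimensional threshold classifiers on two synthetic groups whose decision boundaries differ; everything below (data, function names) is an illustrative toy, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def best_threshold_acc(x, y):
    """Accuracy of the best classifier of the form predict(x) = (x > t),
    found by brute force over the data points as candidate thresholds."""
    cands = np.concatenate([x, [x.min() - 1.0]])
    return max(((x > t) == y).mean() for t in cands)

# Two groups with shifted decision boundaries (0 for group A, 1 for group B).
xa = rng.uniform(-2, 2, 500); ya = xa > 0.0
xb = rng.uniform(-2, 2, 500); yb = xb > 1.0

# Split classifiers: trained and deployed separately on each group.
split_acc = 0.5 * (best_threshold_acc(xa, ya) + best_threshold_acc(xb, yb))
# Group-blind classifier: a single threshold for the pooled data.
blind_acc = best_threshold_acc(np.concatenate([xa, xb]),
                               np.concatenate([ya, yb]))
```

Here splitting is perfectly accurate while the group-blind classifier must misclassify points between the two boundaries; the paper studies when and by how much such gaps arise.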

Obfuscation via Information Density Estimation

no code implementations • 17 Oct 2019 • Hsiang Hsu, Shahab Asoodeh, Flavio du Pin Calmon

The core of this mechanism relies on a data-driven estimate of the trimmed information density for which we propose a novel estimator, named the trimmed information density estimator (TIDE).

Density Estimation
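The underlying object here is the information density log p(x, y) / (p(x) p(y)). As a generic sketch only, the following computes a plug-in estimate from discrete samples with values clipped ("trimmed") to a bounded range; the paper's TIDE estimator is a different and more careful construction, and the function name and trimming rule below are illustrative:

```python
import numpy as np

def trimmed_info_density(x, y, clip=3.0):
    """Plug-in estimate of the information density
    log p(x, y) / (p(x) p(y)) from paired discrete samples, clipped to
    [-clip, clip]. Generic sketch only; not the paper's TIDE estimator."""
    x, y = np.asarray(x), np.asarray(y)
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    # Empirical joint distribution over the observed alphabets.
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0 / len(x))
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    dens = np.log(joint[xi, yi] / (px[xi] * py[yi]))
    return np.clip(dens, -clip, clip)
```

For perfectly dependent binary samples the estimate is log 2 everywhere; for independent samples it is 0.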

Correspondence Analysis Using Neural Networks

2 code implementations • 21 Feb 2019 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies.
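For reference, classical CA (which this paper reimplements with neural networks to scale beyond small contingency tables) reduces to an SVD of the standardized residuals of a contingency table. A toy numpy sketch with illustrative data:

```python
import numpy as np

# Toy contingency table of two categorical variables (illustrative data).
N = np.array([[20., 10.,  5.],
              [ 5., 15., 10.],
              [10.,  5., 20.]])

P = N / N.sum()                                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates; the first two columns give the usual 2-D CA plot.
row_coords = (U * sv) / np.sqrt(r)[:, None]
```

The squared singular values are the principal inertias, and their sum equals the total inertia of the table (the chi-squared statistic divided by the sample size).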


Generalizing Correspondence Analysis for Applications in Machine Learning

no code implementations • 21 Jun 2018 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

In this paper, we provide a novel interpretation of CA in terms of an information-theoretic quantity called the principal inertia components.

BIG-bench Machine Learning • Dimensionality Reduction • +2

Generalizing Bottleneck Problems

no code implementations • 16 Feb 2018 • Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, Flavio P. Calmon

Given a pair of random variables $(X, Y)\sim P_{XY}$ and two convex functions $f_1$ and $f_2$, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs $\left(I_{f_1}(W; X), I_{f_2}(W; Y)\right)$, where $I_f$ denotes $f$-information and $W$ varies over the set of all discrete random variables satisfying the Markov condition $W \to X \to Y$.
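For reference, the $f$-information used above is the $f$-divergence between the joint law and the product of the marginals; in the discrete case,

```latex
I_f(W; X) \;=\; \sum_{w,x} P_W(w)\, P_X(x)\,
  f\!\left(\frac{P_{WX}(w,x)}{P_W(w)\, P_X(x)}\right),
```

which reduces to the usual mutual information $I(W;X)$ for $f(t) = t\log t$, so the classical information bottleneck trade-off corresponds to the choice $f_1 = f_2 = t\log t$ in this formulation.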
