Search Results for author: Hsiang Hsu

Found 18 papers, 9 papers with code

MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective

no code implementations • 23 May 2024 • Yizhuo Chen, Chun-Fu Chen, Hsiang Hsu, Shaohan Hu, Marco Pistoia, Tarek Abdelzaher

The growing richness of large-scale datasets has been crucial in driving the rapid advancement and wide adoption of machine learning technologies.

Attribute

OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning

no code implementations • 6 Feb 2024 • Wei-Cheng Huang, Chun-Fu Chen, Hsiang Hsu

We illustrate that a simplified prompt-based method can achieve results comparable to previous state-of-the-art (SOTA) methods equipped with a prompt pool, using far fewer learnable parameters and lower inference cost.

Class Incremental Learning · Incremental Learning

Dropout-Based Rashomon Set Exploration for Efficient Predictive Multiplicity Estimation

no code implementations • 1 Feb 2024 • Hsiang Hsu, Guihong Li, Shaohan Hu, Chun-Fu Chen

Predictive multiplicity refers to the phenomenon in which classification tasks may admit multiple competing models that achieve almost-equally-optimal performance, yet generate conflicting outputs for individual samples.

Model Selection
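
The definition above can be made concrete with a toy sketch (hypothetical data, not from the paper): two threshold classifiers that reach identical accuracy yet disagree on an individual sample.

```python
# Toy illustration of predictive multiplicity: two "competing" models
# with equal accuracy that output conflicting predictions for one input.
data = [(0.1, 0), (0.45, 1), (0.55, 0), (0.9, 1)]  # (feature, label)

def make_classifier(threshold):
    """A one-feature classifier: predict 1 iff x >= threshold."""
    return lambda x: int(x >= threshold)

model_a = make_classifier(0.4)
model_b = make_classifier(0.6)

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(model_a), accuracy(model_b))  # both 0.75: indistinguishable
print(model_a(0.5), model_b(0.5))            # 1 vs 0: conflicting outputs
```

Both models misclassify exactly one training point, so no accuracy criterion can choose between them, yet they disagree on the sample x = 0.5.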

Machine Unlearning for Image-to-Image Generative Models

2 code implementations • 1 Feb 2024 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.

Machine Unlearning

Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models

no code implementations • 22 Dec 2023 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

The rapid growth of machine learning has spurred legislative initiatives such as "the Right to be Forgotten," allowing users to request data removal.

Machine Unlearning

Arbitrariness Lies Beyond the Fairness-Accuracy Frontier

1 code implementation • 15 Jun 2023 • Carol Xuan Long, Hsiang Hsu, Wael Alghamdi, Flavio P. Calmon

Machine learning tasks may admit multiple competing models that achieve similar performance yet produce conflicting outputs for individual samples -- a phenomenon known as predictive multiplicity.

Decision Making · Fairness

Arbitrary Decisions are a Hidden Cost of Differentially Private Training

1 code implementation • 28 Feb 2023 • Bogdan Kulynych, Hsiang Hsu, Carmela Troncoso, Flavio P. Calmon

We demonstrate that such randomization incurs predictive multiplicity: for a given input example, the output predicted by equally-private models depends on the randomness used in training.

Privacy Preserving
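
The mechanism behind this claim can be sketched with a generic Laplace mechanism rather than the paper's DP training experiments (all names and numbers below are hypothetical): two releases with the same privacy budget, differing only in the random seed, can flip a downstream thresholded decision.

```python
import numpy as np

def dp_release(values, epsilon, seed):
    """Release a mean via the Laplace mechanism.

    Sensitivity of the mean is 1/n for values bounded in [0, 1].
    """
    rng = np.random.default_rng(seed)
    sensitivity = 1.0 / len(values)
    return float(np.mean(values) + rng.laplace(scale=sensitivity / epsilon))

scores = [0.48, 0.52, 0.50, 0.49]  # hypothetical data near a decision boundary
out1 = dp_release(scores, epsilon=0.5, seed=0)
out2 = dp_release(scores, epsilon=0.5, seed=1)
# Both outputs satisfy the same epsilon-DP guarantee, yet the decision
# int(out >= 0.5) can differ depending only on the injected randomness.
print(out1, out2)
```

The same effect arises in DP-SGD, where the per-step noise plays the role of the seed-dependent Laplace draw.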

Rashomon Capacity: A Metric for Predictive Multiplicity in Classification

1 code implementation • 2 Jun 2022 • Hsiang Hsu, Flavio du Pin Calmon

Predictive multiplicity occurs when classification models with statistically indistinguishable performances assign conflicting predictions to individual samples.

Classification · Decision Making
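
Rashomon Capacity scores a sample's multiplicity as the capacity of a channel whose rows are the competing models' predictive distributions. A minimal Blahut-Arimoto sketch under that reading (a generic capacity solver, not the authors' released code):

```python
import numpy as np

def channel_capacity(P, iters=500):
    """Blahut-Arimoto channel capacity in bits.

    P[i] is the output distribution of the i-th model (all entries > 0).
    """
    P = np.asarray(P, dtype=float)
    r = np.full(P.shape[0], 1.0 / P.shape[0])      # prior over models
    for _ in range(iters):
        q = r @ P                                   # output marginal
        d = np.exp((P * np.log(P / q)).sum(axis=1)) # exp(KL(P_i || q))
        r = r * d / (r * d).sum()
    q = r @ P
    return float((r * (P * np.log2(P / q)).sum(axis=1)).sum())

# Two models that agree: no multiplicity, capacity 0 bits.
print(channel_capacity([[0.9, 0.1], [0.9, 0.1]]))
# Two models with flipped predictions: high multiplicity, ~0.531 bits.
print(channel_capacity([[0.9, 0.1], [0.1, 0.9]]))
```

The flipped-prediction case matches the capacity of a binary symmetric channel with crossover 0.1, i.e. 1 - H(0.1) ≈ 0.531 bits.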

Robust Hybrid Learning With Expert Augmentation

1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen

Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.

Data Augmentation

CPR: Classifier-Projection Regularization for Continual Learning

1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon

Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability.

Continual Learning
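
The entropy-maximizing regularizer can be sketched as an entropy bonus on the softmax output (a generic illustration; the weight lam and the loss form are placeholders, not the paper's exact objective):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def entropy(p):
    """Shannon entropy in nats, with a small offset for numerical safety."""
    return float(-(p * np.log(p + 1e-12)).sum())

logits = np.array([2.0, 0.5, -1.0])  # hypothetical classifier outputs
probs = softmax(logits)
lam = 0.1                            # regularization weight (placeholder)
ce = -np.log(probs[0])               # cross-entropy, true class = 0
loss = ce - lam * entropy(probs)     # subtracting entropy maximizes it
```

Because the entropy term is subtracted, gradient descent on the total loss pushes the output distribution toward higher entropy, which the paper links to wider local minima.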

To Split or Not to Split: The Impact of Disparate Treatment in Classification

no code implementations • 12 Feb 2020 • Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon

To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute).

Attribute · General Classification
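
The split-versus-group-blind comparison can be sketched with threshold classifiers on toy data (hypothetical numbers; the paper's analysis is theoretical):

```python
# Hypothetical two-group data: (feature, label, group).
data = [(0.2, 0, 'a'), (0.6, 1, 'a'), (0.4, 0, 'b'), (0.8, 1, 'b')]

def best_threshold(points):
    """Pick the candidate cut point with the highest accuracy."""
    candidates = sorted(x for x, _, _ in points)
    def acc(t):
        return sum((x >= t) == bool(y) for x, y, _ in points) / len(points)
    return max(candidates, key=acc)

# Group-blind: one classifier for everyone.
t_blind = best_threshold(data)
# Split: a separate classifier per group (disparate treatment).
t_split = {g: best_threshold([p for p in data if p[2] == g])
           for g in ('a', 'b')}
print(t_blind, t_split)
```

On this toy data the split classifiers can tailor the threshold to each group (0.6 for group a, 0.8 for group b), while the group-blind classifier must commit to a single cut.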

Obfuscation via Information Density Estimation

no code implementations • 17 Oct 2019 • Hsiang Hsu, Shahab Asoodeh, Flavio du Pin Calmon

The core of this mechanism relies on a data-driven estimate of the trimmed information density for which we propose a novel estimator, named the trimmed information density estimator (TIDE).

Density Estimation

Correspondence Analysis Using Neural Networks

2 code implementations • 21 Feb 2019 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies.

Epidemiology
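
Classic CA reduces to an SVD of the standardized residuals of a contingency table; the paper replaces this with a neural estimator, but the textbook baseline looks like the following (toy table with hypothetical counts):

```python
import numpy as np

N = np.array([[20.0, 10.0], [5.0, 25.0]])        # toy contingency table
P = N / N.sum()                                   # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)               # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, sing, Vt = np.linalg.svd(S)                    # CA factorization
inertia = sing ** 2                               # principal inertias
# Total inertia equals Pearson's chi-square statistic divided by n,
# so the leading singular directions capture the strongest dependencies.
print(inertia)
```

The principal inertias here are the principal inertia components that the follow-up paper below generalizes.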

Generalizing Correspondence Analysis for Applications in Machine Learning

no code implementations • 21 Jun 2018 • Hsiang Hsu, Salman Salamatian, Flavio P. Calmon

In this paper, we provide a novel interpretation of CA in terms of an information-theoretic quantity called the principal inertia components.

BIG-bench Machine Learning · Dimensionality Reduction +2

Generalizing Bottleneck Problems

no code implementations • 16 Feb 2018 • Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, Flavio P. Calmon

Given a pair of random variables $(X, Y)\sim P_{XY}$ and two convex functions $f_1$ and $f_2$, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs $\left(I_{f_1}(W; X), I_{f_2}(W; Y)\right)$, where $I_f$ denotes $f$-information and $W$ varies over the set of all discrete random variables satisfying the Markov condition $W \to X \to Y$.

LEMMA
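
For orientation, a standard specialization (stated here for context, not a claim about the paper's main results): taking $f_1(t) = f_2(t) = t\log t$ reduces $f$-information to Shannon mutual information, so the upper boundary of the convex set recovers the classical information bottleneck,
$$\max_{W \,:\, W \to X \to Y} \; I(W; Y) \quad \text{subject to} \quad I(W; X) \le r,$$
while other choices of $f_1, f_2$ yield privacy-funnel-style and other generalized bottleneck problems.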
