Search Results for author: Chi-Heng Lin

Found 10 papers, 4 papers with code

Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance

no code implementations 18 Feb 2024 Chiraag Kaushik, Ran Liu, Chi-Heng Lin, Amrit Khera, Matthew Y. Jin, Wenrui Ma, Vidya Muthukumar, Eva L. Dyer

Classification models are expected to perform equally well for different classes, yet in practice, there are often large gaps in their performance.

Data Augmentation

Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out

no code implementations 22 Jun 2022 Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu

The acceleration result for HB beyond quadratics in this work requires an additional condition, which holds naturally when the dimension is one or, more broadly, when the Hessian is diagonal.
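For reference, the Polyak-Łojasiewicz (PL) condition named in the title can be stated in its standard textbook form (not specific to this paper's analysis): a function $f$ with minimum $f^*$ satisfies PL with parameter $\mu > 0$ if

$$\frac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu \left( f(x) - f^* \right) \quad \text{for all } x,$$

a condition that yields linear convergence of gradient descent even without convexity.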

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

1 code implementation NeurIPS 2021 Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer

Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
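The alignment term can be illustrated with a minimal PyTorch sketch: a generic cosine-similarity loss between paired views. The function name and tensor shapes are illustrative, not the paper's exact objective.

import torch
import torch.nn.functional as F

def alignment_loss(z1, z2):
    # z1, z2: (batch, dim) representations of two transformed views
    # of the same input; minimizing this loss maximizes their
    # cosine similarity.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return -(z1 * z2).sum(dim=-1).mean()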

Escaping Saddle Points Faster with Stochastic Momentum

no code implementations ICLR 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

At the same time, a widely observed empirical phenomenon is that stochastic momentum appears to significantly improve convergence time when training deep networks, and variants of it have flourished in the development of other popular update methods, e.g. Adam [KB15], AMSGrad [RKK18], etc. (a minimal sketch of the stochastic momentum update appears below).

Open-Ended Question Answering • Stochastic Optimization
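As mentioned above, here is a minimal NumPy sketch of one stochastic heavy-ball step; the function name and hyperparameter defaults are illustrative, not taken from the paper:

import numpy as np

def sgd_momentum_step(w, v, stochastic_grad, lr=0.01, beta=0.9):
    # w: parameters; v: velocity buffer; stochastic_grad: minibatch
    # gradient evaluated at w.
    v = beta * v - lr * stochastic_grad  # accumulate momentum
    w = w + v                            # heavy-ball parameter update
    return w, v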

Making transport more robust and interpretable by moving data through a small number of anchor points

1 code implementation 21 Dec 2020 Chi-Heng Lin, Mehdi Azabou, Eva L. Dyer

Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities.
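The idea of moving data through anchors can be sketched as a factored coupling: couple the source points to a small set of anchors, couple the anchors to the target, and compose the two plans. The following is an illustrative construction assuming uniform anchor weights, well-scaled costs, and a hand-rolled entropic solver; it is not the paper's algorithm.

import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    # Entropic OT plan between histograms a (n,) and b (m,)
    # with cost matrix C of shape (n, m).
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def anchored_plan(X, Y, Z):
    # Route transport from X (n points) to Y (m points) through
    # k anchor points Z, instead of coupling X to Y directly.
    n, m, k = len(X), len(Y), len(Z)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    z = np.full(k, 1 / k)  # uniform anchor weights (assumption)
    cost = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    P1 = sinkhorn(a, z, cost(X, Z))   # source -> anchors
    P2 = sinkhorn(z, b, cost(Z, Y))   # anchors -> target
    return P1 @ np.diag(1 / z) @ P2   # composed plan with marginals (a, b)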

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network

no code implementations 4 Oct 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

Our result shows that, with an appropriate choice of parameters, Polyak's momentum achieves an accelerated convergence rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$.
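For reference, Polyak's (heavy-ball) momentum update is

$$w_{t+1} = w_t - \eta \nabla f(w_t) + \beta\,(w_t - w_{t-1}),$$

and a rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$ improves on the $(1-\Theta(\frac{1}{\kappa'}))^t$ rate typical of vanilla gradient descent; the precise definition of the condition number $\kappa'$ (e.g. via the relevant Gram matrix) depends on the setting analyzed.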

Bayesian optimization for modular black-box systems with switching costs

no code implementations 4 Jun 2020 Chi-Heng Lin, Joseph D. Miano, Eva L. Dyer

In this work, we propose a new algorithm for switching-cost-aware optimization called Lazy Modular Bayesian Optimization, or LaMBO (an illustrative cost-penalized selection rule is sketched below).

Bayesian Optimization • Image Segmentation +1
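As noted above, switching-cost awareness can be illustrated with a generic selection rule that penalizes changing configuration. This is a toy sketch, not the LaMBO algorithm itself; the function name, callables, and trade-off weight are all hypothetical.

import numpy as np

def pick_next(candidates, acquisition, current, switch_cost, lam=1.0):
    # Trade off acquisition value against the cost of moving from
    # the current configuration to each candidate configuration.
    scores = [acquisition(c) - lam * switch_cost(current, c)
              for c in candidates]
    return candidates[int(np.argmax(scores))]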
