Search Results for author: Hongseok Namkoong

Found 17 papers, 8 papers with code

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

2 code implementations • 10 Mar 2022 • Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, Ludwig Schmidt

In this paper, we revisit the second step of the conventional recipe (train multiple models, then select the single one that performs best on held-out data) in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low error basin.

 Ranked #1 on Image Classification on ImageNet V2 (using extra training data)

Domain Generalization Image Classification +1
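The averaging step behind a "model soup" can be sketched in a few lines. This is a minimal illustration: real soups average full tensor state dicts (e.g. PyTorch `state_dict`s) of a shared architecture, whereas the checkpoint dicts and scalar parameters below are hypothetical stand-ins.

```python
# Minimal sketch of a "uniform soup": elementwise averaging of parameters
# across several fine-tuned checkpoints of the same architecture.
# Scalars stand in for the weight tensors of a real model.

def uniform_soup(checkpoints):
    """Average each named parameter over all checkpoints."""
    n = len(checkpoints)
    return {name: sum(ckpt[name] for ckpt in checkpoints) / n
            for name in checkpoints[0]}

# Three hypothetical fine-tuned models (e.g. different hyperparameters).
ckpts = [
    {"w": 1.0, "b": 0.0},
    {"w": 2.0, "b": 0.3},
    {"w": 3.0, "b": 0.6},
]
soup = uniform_soup(ckpts)
print(soup["w"])  # 2.0
```

Because the soup is built offline, inference cost is that of a single model, which is the point the title makes.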

Evaluating model performance under worst-case subpopulations

no code implementations • NeurIPS 2021 • Mike Li, Hongseok Namkoong, Shangzhou Xia

The performance of ML models degrades when the training population is different from that seen under operation.

Robust fine-tuning of zero-shot models

1 code implementation • 4 Sep 2021 • Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt

Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution.

Transfer Learning
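WiSE-FT's weight-space ensembling amounts to linear interpolation between zero-shot and fine-tuned weights. A minimal sketch, with scalar stand-ins for the weight tensors and a hypothetical mixing coefficient `alpha`:

```python
def wise_ft(zero_shot, fine_tuned, alpha=0.5):
    """Interpolate each parameter: (1 - alpha) * zero-shot + alpha * fine-tuned.
    alpha = 0 recovers the zero-shot model, alpha = 1 the fine-tuned one."""
    return {name: (1 - alpha) * zero_shot[name] + alpha * fine_tuned[name]
            for name in zero_shot}

zs = {"w": 0.0, "b": 1.0}   # hypothetical zero-shot weights
ft = {"w": 2.0, "b": 0.0}   # hypothetical fine-tuned weights
print(wise_ft(zs, ft, alpha=0.5))  # {'w': 1.0, 'b': 0.5}
```

Like the soup above, the interpolated model is a single set of weights, so it adds no inference-time cost over standard fine-tuning.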

Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning

no code implementations • 29 Nov 2020 • Hongseok Namkoong, Samuel Daulton, Eytan Bakshy

We propose a novel imitation-learning-based algorithm that distills a TS policy into an explicit policy representation by performing posterior inference and optimization offline.

Action Generation Decision Making +1
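The distillation idea can be illustrated on a toy bandit: draw actions from a Thompson-sampling (TS) policy offline, then summarize those draws as an explicit policy. The two-armed Bernoulli bandit and Beta posteriors below are hypothetical stand-ins, not the paper's setup.

```python
import random
random.seed(0)

# Toy sketch: distill a TS policy for a 2-armed Bernoulli bandit into an
# explicit policy (a fixed action distribution) by imitating offline draws.

def ts_action(posteriors):
    """One Thompson-sampling step: sample each arm's mean from its Beta
    posterior, then play the arm with the largest sample."""
    samples = [random.betavariate(a, b) for a, b in posteriors]
    return samples.index(max(samples))

posteriors = [(8, 2), (2, 8)]  # hypothetical: arm 0 looks much better
draws = [ts_action(posteriors) for _ in range(2000)]
explicit_policy = [draws.count(a) / len(draws) for a in (0, 1)]
print(explicit_policy[0] > 0.9)  # arm 0 dominates the distilled policy
```

The payoff of distillation is that the expensive posterior sampling happens offline; at serving time only the explicit policy is queried.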

Distributionally Robust Losses for Latent Covariate Mixtures

no code implementations • 28 Jul 2020 • John Duchi, Tatsunori Hashimoto, Hongseok Namkoong

While modern large-scale datasets often consist of heterogeneous subpopulations (for example, multiple demographic groups or multiple text corpora), the standard practice of minimizing average loss fails to guarantee uniformly low losses across all subpopulations.

Assessing External Validity Over Worst-case Subpopulations

1 code implementation • 5 Jul 2020 • Sookyo Jeong, Hongseok Namkoong

Study populations are typically sampled from limited points in space and time, and marginalized groups are underrepresented.

Causal Inference

Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding

1 code implementation • NeurIPS 2020 • Hongseok Namkoong, Ramtin Keramati, Steve Yadlowsky, Emma Brunskill

We assess robustness of OPE methods under unobserved confounding by developing worst-case bounds on the performance of an evaluation policy.

Decision Making

In-silico Risk Analysis of Personalized Artificial Pancreas Controllers via Rare-event Simulation

no code implementations • 2 Dec 2018 • Matthew O'Kelly, Aman Sinha, Justin Norden, Hongseok Namkoong

Modern treatments for Type 1 diabetes (T1D) use devices known as artificial pancreata (APs), which combine an insulin pump with a continuous glucose monitor (CGM) operating in a closed-loop manner to control blood glucose levels.

Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation

2 code implementations • NeurIPS 2018 • Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, John Duchi, Russ Tedrake

While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.

Autonomous Driving

Learning Models with Uniform Performance via Distributionally Robust Optimization

no code implementations • 20 Oct 2018 • John Duchi, Hongseok Namkoong

A common goal in statistics and machine learning is to learn models that can perform well against distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects.

Stochastic Optimization

Fairness Without Demographics in Repeated Loss Minimization

1 code implementation • ICML 2018 • Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss.


Generalizing to Unseen Domains via Adversarial Data Augmentation

2 code implementations • NeurIPS 2018 • Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, Silvio Savarese

Only using training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model.

Data Augmentation Semantic Segmentation
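The iterative loop described above (make an example "hard" for the current model, then add it to the dataset) can be sketched on a toy scalar model. The quadratic loss, finite-difference gradient, and step sizes below are illustrative stand-ins for a neural network and its input-space gradient.

```python
# Toy sketch of adversarial data augmentation: perturb a training point by
# gradient *ascent* on the model's loss, producing a "hard" fictitious
# example, and append it to the dataset.

def loss(w, x, y):
    """Squared error of a toy scalar model w * x against target y."""
    return (w * x - y) ** 2

def harden(x, y, w, step=0.1, iters=3, eps=1e-6):
    """Ascend the loss in input space to make (x, y) harder for model w."""
    for _ in range(iters):
        g = (loss(w, x + eps, y) - loss(w, x - eps, y)) / (2 * eps)
        x = x + step * g
    return x

data = [(1.5, 1.0)]
w = 1.0                       # current model
x_hard = harden(1.5, 1.0, w)
data.append((x_hard, 1.0))    # augment with the adversarial example
print(loss(w, x_hard, 1.0) > loss(w, 1.5, 1.0))  # True: the new point is harder
```

In the paper's setting the model is then retrained on the augmented set and the loop repeats, alternating between hardening examples and updating the model.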

Certifying Some Distributional Robustness with Principled Adversarial Training

no code implementations • ICLR 2018 • Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John Duchi

Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.

Adaptive Sampling Probabilities for Non-Smooth Optimization

no code implementations • ICML 2017 • Hongseok Namkoong, Aman Sinha, Steve Yadlowsky, John C. Duchi

Standard forms of coordinate and stochastic gradient methods do not adapt to structure in data; their good behavior under random sampling is predicated on uniformity in data.

Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences

no code implementations • NeurIPS 2016 • Hongseok Namkoong, John C. Duchi

We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance.

Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach

no code implementations • 11 Oct 2016 • John Duchi, Peter Glynn, Hongseok Namkoong

We study statistical inference and distributionally robust solution methods for stochastic optimization problems, focusing on confidence intervals for optimal values and solutions that achieve exact coverage asymptotically.

Stochastic Optimization

Variance-based regularization with convex objectives

1 code implementation • NeurIPS 2017 • John Duchi, Hongseok Namkoong

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error.

General Classification Stochastic Optimization
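The flavor of the variance penalty can be shown with the approximation that the worst-case risk over a small chi-square ball around the empirical distribution is roughly mean loss plus sqrt(2·rho·Var/n). This is an illustrative approximation, not the paper's exact convex surrogate, and `rho` is a hypothetical robustness radius.

```python
import math

def variance_regularized_risk(losses, rho=1.0):
    """Approximate worst-case risk over a chi-square ball:
    mean(losses) + sqrt(2 * rho * Var / n). `rho` is illustrative."""
    n = len(losses)
    mean = sum(losses) / n
    var = sum((l - mean) ** 2 for l in losses) / n  # population variance
    return mean + math.sqrt(2 * rho * var / n)

# Two hypothetical loss vectors with the same mean but different spread:
low_spread = [0.5, 0.5, 0.5, 0.5]
high_spread = [0.0, 1.0, 0.0, 1.0]
print(variance_regularized_risk(low_spread))   # 0.5 (no variance penalty)
print(variance_regularized_risk(high_spread))  # > 0.5
```

The penalty rewards predictors whose losses are not only low on average but also uniform across examples, which is the bias-variance trade the abstract alludes to.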
