1 code implementation • 29 May 2025 • Jiashuo Liu, Tianyu Wang, Henry Lam, Hongseok Namkoong, Jose Blanchet
We introduce dro, an open-source Python library for distributionally robust optimization (DRO) for regression and classification problems.
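As background (a generic formulation, not a description of the library's interface), DRO replaces empirical risk minimization with a worst case over an ambiguity set around the empirical distribution $\hat{P}_n$:

$$\min_{\theta}\ \sup_{Q:\, d(Q,\hat{P}_n)\le \epsilon}\ \mathbb{E}_{(x,y)\sim Q}\big[\ell(\theta;x,y)\big],$$

where $d$ is a discrepancy such as a Wasserstein distance or an $f$-divergence and $\epsilon$ is the ambiguity radius.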
no code implementations • 13 May 2025 • Olivier C. Pasche, Henry Lam, Sebastian Engelke
The advantages of this extreme conformal prediction method are illustrated in a simulation study and in an application to flood risk forecasting.
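For reference, plain split conformal prediction (the baseline that extreme-value variants build on) takes only a few lines; this sketch assumes a toy linear model and does not reproduce the paper's extreme-quantile adjustment.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

# Split into a proper training set and a calibration set.
X_tr, y_tr, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]
model = LinearRegression().fit(X_tr, y_tr)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1  # target miscoverage level
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# Prediction interval for a new point: point prediction +/- q.
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% conformal interval: [{pred - q:.2f}, {pred + q:.2f}]")
```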
no code implementations • 4 Apr 2025 • Liviu Aolaritei, Bart P. G. Van Parys, Henry Lam, Michael I. Jordan
Importance Sampling (IS) is a widely used variance reduction technique for enhancing the efficiency of Monte Carlo methods, particularly in rare-event simulation and related applications.
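A minimal numpy sketch of the core IS idea, shifting the sampling distribution toward the rare region and reweighting by the likelihood ratio, on the toy problem of estimating P(Z > 4) for a standard normal Z (illustrative only, not the estimators studied in the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, threshold = 100_000, 4.0

# Naive Monte Carlo: almost no samples land in the rare region.
z = rng.standard_normal(n)
naive = np.mean(z > threshold)

# Importance sampling: draw from N(threshold, 1) and reweight each sample
# by the likelihood ratio phi(x) / phi(x - threshold).
x = rng.normal(loc=threshold, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=threshold)
is_est = np.mean((x > threshold) * weights)

print(f"true probability    {norm.sf(threshold):.3e}")
print(f"naive Monte Carlo   {naive:.3e}")
print(f"importance sampling {is_est:.3e}")
```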
no code implementations • 1 Mar 2025 • Adam N. Elmachtoub, Henry Lam, Haixiang Lan, Haofeng Zhang
Data-driven optimization aims to translate a machine learning model into decision-making by optimizing decisions on estimated costs.
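A toy illustration of this estimate-then-optimize pipeline (a generic sketch, not the specific constructions analyzed in the paper; the linear cost model is an assumption for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Contexts and noisy observed costs for each of 3 candidate decisions.
n_decisions = 3
X = rng.normal(size=(400, 5))
true_w = rng.normal(size=(5, n_decisions))
costs = X @ true_w + rng.normal(scale=0.5, size=(400, n_decisions))

# Step 1 (estimate): learn a cost model from data.
cost_model = LinearRegression().fit(X, costs)

# Step 2 (optimize): for a new context, pick the decision with the
# smallest estimated cost.
x_new = rng.normal(size=(1, 5))
est_costs = cost_model.predict(x_new)[0]
decision = int(np.argmin(est_costs))
print("estimated costs:", np.round(est_costs, 2), "-> choose decision", decision)
```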
no code implementations • 15 Dec 2024 • Fengpei Li, Haoxian Chen, Jiahe Lin, Arkin Gupta, Xiaowei Tan, Honglei Zhao, Gang Xu, Yuriy Nevmyvaka, Agostino Capponi, Henry Lam
For many complex simulation tasks spanning areas such as healthcare, engineering, and finance, Monte Carlo (MC) methods are invaluable due to their unbiased estimates and precise error quantification.
1 code implementation • 9 Oct 2024 • Yibo Zeng, Jiashuo Liu, Henry Lam, Hongseok Namkoong
Since it is impossible to generalize to a completely new and unknown domain, we study models that are easy to adapt to the target domain even with few labeled examples.
no code implementations • 20 Jun 2024 • Ziyi Huang, Henry Lam, Haofeng Zhang
To fill this gap, we propose a theoretical framework to analyze the impact of approximate inference in stochastic linear bandits and conduct regret analysis on two Bayesian bandit algorithms: Linear Thompson Sampling (LinTS) and Linear Bayesian Upper Confidence Bound (LinBUCB), an extension of the Bayesian Upper Confidence Bound.
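For reference, a minimal LinTS loop with exact conjugate Gaussian inference (the paper's focus is what changes when this posterior is only approximate); the arm features, noise level, and prior below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma2, lam = 5, 2000, 0.25, 1.0
theta_star = rng.normal(size=d)

A = lam * np.eye(d)   # posterior precision (prior: N(0, lam^-1 I))
b = np.zeros(d)       # accumulates x * r / sigma2

for t in range(T):
    arms = rng.normal(size=(10, d))                       # 10 candidate feature vectors
    cov = np.linalg.inv(A)
    theta_tilde = rng.multivariate_normal(cov @ b, cov)   # posterior sample
    x = arms[np.argmax(arms @ theta_tilde)]               # play the sampled-best arm
    r = x @ theta_star + rng.normal(scale=np.sqrt(sigma2))
    A += np.outer(x, x) / sigma2                          # conjugate posterior update
    b += x * r / sigma2

print("estimation error:", np.linalg.norm(np.linalg.inv(A) @ b - theta_star))
```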
no code implementations • 23 May 2024 • Haoxian Chen, Hanyang Zhao, Henry Lam, David Yao, Wenpin Tang
Direct Preference Optimization (DPO) has recently emerged as a popular approach to improve reinforcement learning with human feedback (RLHF), leading to better techniques to fine-tune large language models (LLM).
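For reference, the standard DPO objective scores a preferred/rejected pair by the gap between the policy's and the reference model's log-ratios; a minimal numpy sketch of the per-pair loss, assuming the sequence log-probabilities are already computed:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * (policy log-ratio - reference log-ratio))."""
    policy_margin = logp_chosen - logp_rejected
    ref_margin = ref_logp_chosen - ref_logp_rejected
    logits = beta * (policy_margin - ref_margin)
    return -np.log(1.0 / (1.0 + np.exp(-logits)))

# Toy example: the policy prefers the chosen response more strongly than the
# reference does, so the loss falls below log(2) ~= 0.69.
print(dpo_loss(logp_chosen=-10.0, logp_rejected=-14.0,
               ref_logp_chosen=-11.0, ref_logp_rejected=-12.0))
```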
1 code implementation • 23 May 2024 • Huajie Qian, Donghao Ying, Henry Lam, Wotao Yin
Ensemble learning is a popular technique to improve the accuracy of machine learning models.
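As a generic illustration of the ensembling idea (bagging in scikit-learn, not the specific ensemble scheme proposed in the paper):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

single = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

# Averaging many high-variance trees typically improves accuracy.
print("single tree :", cross_val_score(single, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bagged, X, y, cv=5).mean())
```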
1 code implementation • 16 Jan 2024 • Zhepeng Cen, Zuxin Liu, Zitong Wang, Yihang Yao, Henry Lam, Ding Zhao
Offline reinforcement learning (RL) offers a promising direction for learning policies from pre-collected datasets without requiring further interactions with the environment.
no code implementations • 17 Oct 2023 • Henry Lam, Zitong Wang
Stochastic gradient descent (SGD) or stochastic approximation has been widely used in model training and stochastic optimization.
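A bare-bones SGD loop on least squares, included only to fix notation for the iteration whose uncertainty is being quantified (the step-size schedule and model are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + rng.normal(scale=0.1, size=n)

theta = np.zeros(d)
for t in range(1, 50_001):
    i = rng.integers(n)                      # sample one data point
    grad = (X[i] @ theta - y[i]) * X[i]      # gradient of 0.5 * (x'theta - y)^2
    theta -= (1.0 / t ** 0.6) * grad         # diminishing step size

print("parameter error:", np.linalg.norm(theta - theta_star))
```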
no code implementations • 15 Oct 2023 • Haoxian Chen, Henry Lam
Its key idea is to use a surrogate model to approximate the objective and, importantly, to quantify the associated uncertainty, which allows a sequential search over query points that balances exploitation and exploration.
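A minimal sketch of such a surrogate-plus-acquisition loop, using a Gaussian-process surrogate and an upper-confidence-bound rule (generic Bayesian optimization, not the paper's specific procedure):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Toy black-box function to maximize.
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(3, 1))
y_obs = objective(X_obs).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mean, std = gp.predict(grid, return_std=True)
    ucb = mean + 2.0 * std                    # exploitation + exploration bonus
    x_next = grid[np.argmax(ucb)].reshape(1, 1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next).ravel())

print("best x found:", X_obs[np.argmax(y_obs)].item(), "value:", y_obs.max())
```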
no code implementations • 24 Jun 2023 • Zhenyuan Liu, Bart P. G. Van Parys, Henry Lam
In data-driven optimization, sample average approximation (SAA) is known to suffer from the so-called optimizer's curse that causes an over-optimistic evaluation of the solution performance.
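A quick simulation of this over-optimism: with several decisions whose true expected costs are identical, the SAA optimal value systematically understates the true cost of whichever decision it picks (toy numbers, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_decisions, n_samples, n_reps = 10, 50, 2000
true_cost = 1.0   # every decision has the same true expected cost

gaps = []
for _ in range(n_reps):
    # Sample-average cost of each decision.
    samples = rng.normal(loc=true_cost, scale=1.0, size=(n_decisions, n_samples))
    saa_values = samples.mean(axis=1)
    # SAA reports the minimum sample average, but the chosen decision's
    # true cost is still 1.0, so the reported value is biased downward.
    gaps.append(true_cost - saa_values.min())

print("average optimistic bias:", np.mean(gaps))   # noticeably positive
```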
no code implementations • 16 Jun 2023 • Garud Iyengar, Henry Lam, Tianyu Wang
We develop a general bias correction approach, building on what we call Optimizer's Information Criterion (OIC), that directly approximates the first-order bias and does not require solving any additional optimization problems.
1 code implementation • NeurIPS 2023 • Ziyi Huang, Henry Lam, Haofeng Zhang
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
1 code implementation • 28 May 2023 • Yu Chen, Fengpei Li, Anderson Schneider, Yuriy Nevmyvaka, Asohan Amarasingham, Henry Lam
We then propose a robust and computationally efficient method, modified from MLE, that does not rely on prior estimation of the heterogeneous intensity and is thus applicable in data-limited regimes (e.g., few-shot settings or no repeated observations).
1 code implementation • 13 Apr 2023 • Adam N. Elmachtoub, Henry Lam, Haofeng Zhang, Yunfan Zhao
In this paper, we show that a reverse behavior appears when the model class is well-specified and there is sufficient data.
no code implementations • 3 Dec 2022 • Garud Iyengar, Henry Lam, Tianyu Wang
We propose a simple approach in which the distribution of random perturbations is approximated using a parametric family of distributions.
no code implementations • 22 Oct 2022 • Henry Lam, Kaizheng Wang, Yuhang Wu, Yichen Zhang
We study the problem of multi-task non-smooth optimization that arises ubiquitously in statistical learning, decision-making and risk management.
no code implementations • 21 Oct 2022 • Mengdi Xu, Peide Huang, Yaru Niu, Visak Kumar, JieLin Qiu, Chao Fang, Kuan-Hui Lee, Xuewei Qi, Henry Lam, Bo Li, Ding Zhao
One key challenge for multi-task Reinforcement learning (RL) in practice is the absence of task indicators.
no code implementations • 9 Jun 2022 • Ziyi Huang, Henry Lam, Haofeng Zhang
To overcome these restrictions, we study conditional generative models for aleatoric uncertainty estimation.
no code implementations • 4 Apr 2022 • Mansur Arief, Zhepeng Cen, Zhenyuan Liu, Zhiyuan Huang, Henry Lam, Bo Li, Ding Zhao
In this work, we present the Deep Importance Sampling (Deep IS) framework, which uses a deep neural network to obtain an efficient IS distribution on par with the state of the art, reducing the required sample size by a factor of 43 relative to naive sampling to achieve 10% relative error, while producing a much less conservative estimate.
1 code implementation • NeurIPS 2023 • Ziyi Huang, Henry Lam, Amirhossein Meisami, Haofeng Zhang
Previous research only indicates a negative theoretical result: Thompson sampling could have a worst-case linear regret $\Omega(T)$ with a constant threshold on the inference error measured by one $\alpha$-divergence.
no code implementations • 3 Dec 2021 • Yuanlu Bai, Henry Lam, Svitlana Vyetrenko, Tucker Balch
Multi-agent simulation is commonly used across multiple disciplines and, in recent years, especially in artificial intelligence, where it creates environments for downstream machine learning or reinforcement learning tasks.
1 code implementation • 3 Nov 2021 • Mansur Arief, Yuanlu Bai, Wenhao Ding, Shengyi He, Zhiyuan Huang, Henry Lam, Ding Zhao
Rare-event simulation techniques, such as importance sampling (IS), constitute powerful tools to speed up challenging estimation of rare catastrophic events.
no code implementations • 23 Oct 2021 • Ziyi Huang, Henry Lam, Haofeng Zhang
Uncertainty quantification is at the core of the reliability and robustness of machine learning.
no code implementations • 29 Sep 2021 • Elioth Sanabria, David Yao, Henry Lam
In this paper, we show that even for problems with a large state space, when the solution policy of the MDP can be represented by a tree-like structure, our proposed algorithm retrieves that policy tree in computationally tractable time.
no code implementations • 21 Jun 2021 • Yibo Zeng, Henry Lam
In contrast to the hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function.
1 code implementation • 19 Jun 2021 • Mengdi Xu, Peide Huang, Fengpei Li, Jiacheng Zhu, Xuewei Qi, Kentaro Oguchi, Zhiyuan Huang, Henry Lam, Ding Zhao
Evaluating rare but high-stakes events is one of the main challenges in obtaining reliable reinforcement learning policies, especially in large or infinite state/action spaces, where the number of testing iterations required becomes prohibitively large.
no code implementations • 27 May 2021 • Yuanlu Bai, Tucker Balch, Haoxian Chen, Danial Dervovic, Henry Lam, Svitlana Vyetrenko
Stochastic simulation aims to compute output performance for complex models that lack analytical tractability.
no code implementations • 26 Feb 2021 • Haoxian Chen, Ziyi Huang, Henry Lam, Huajie Qian, Haofeng Zhang
We study the generation of prediction intervals in regression for uncertainty quantification.
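As one common baseline for producing such intervals (quantile regression with one model per quantile; a generic sketch, not the method developed in the paper):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2 + 0.1 * np.abs(X).ravel())

# Fitting the 5% and 95% conditional quantiles gives a 90% prediction interval
# whose width adapts to the heteroscedastic noise.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

x_new = np.array([[1.5]])
print("90% PI at x=1.5:", lower.predict(x_new)[0], "to", upper.predict(x_new)[0])
```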
no code implementations • 1 Jan 2021 • Ziyi Huang, Henry Lam, Haofeng Zhang
Deep learning has achieved state-of-the-art performance to generate high-quality prediction intervals (PIs) for uncertainty quantification in regression tasks.
no code implementations • 10 Dec 2020 • Haidong Li, Henry Lam, Zhe Liang, Yijie Peng
We consider a context-dependent ranking and selection problem.
Methodology
1 code implementation • 10 Dec 2020 • Haidong Li, Henry Lam, Yijie Peng
We consider a simulation optimization problem for context-dependent decision-making.
Decision Making
Methodology
no code implementations • 10 Oct 2020 • Yuanlu Bai, Zhiyuan Huang, Henry Lam, Ding Zhao
We study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests.
no code implementations • 18 Jul 2020 • Jose Blanchet, Henry Lam, Yang Liu, Ruodu Wang
We discuss relevant applications in risk management and economics.
2 code implementations • 28 Jun 2020 • Mansur Arief, Zhiyuan Huang, Guru Koushik Senthil Kumar, Yuanlu Bai, Shengyi He, Wenhao Ding, Henry Lam, Ding Zhao
Evaluating the reliability of intelligent physical systems against rare safety-critical events poses a huge testing burden for real-world applications.
no code implementations • 14 Oct 2019 • Henry Lam, Fengpei Li, Siddharth Prusty
In many learning problems, the training and testing data follow different distributions, and a particularly common situation is covariate shift.
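The textbook remedy is to reweight training examples by the density ratio between test and training covariates; a toy sketch with the densities assumed known (in practice the ratio itself must be estimated, which is typically the harder part):

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training covariates ~ N(0, 1), test covariates ~ N(1, 1); same y | x.
x_tr = rng.normal(0.0, 1.0, size=500)
y_tr = np.sin(x_tr) + rng.normal(scale=0.1, size=500)

# Importance weights w(x) = p_test(x) / p_train(x), known here for illustration.
w = norm.pdf(x_tr, loc=1.0) / norm.pdf(x_tr, loc=0.0)

plain = LinearRegression().fit(x_tr[:, None], y_tr)
reweighted = LinearRegression().fit(x_tr[:, None], y_tr, sample_weight=w)

x_te = rng.normal(1.0, 1.0, size=2000)
y_te = np.sin(x_te) + rng.normal(scale=0.1, size=2000)
for name, m in [("unweighted", plain), ("reweighted", reweighted)]:
    mse = np.mean((m.predict(x_te[:, None]) - y_te) ** 2)
    print(name, "test MSE:", round(mse, 4))
```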
no code implementations • ICLR 2020 • Yi Zhu, Jing Dong, Henry Lam
We investigate statistical uncertainty quantification for reinforcement learning (RL) and its implications in exploration policy.
no code implementations • 19 Apr 2019 • Zhiyuan Huang, Mansur Arief, Henry Lam, Ding Zhao
These Monte Carlo samples are generated from stochastic input models constructed based on real-world data.
no code implementations • 1 Oct 2017 • Zhiyuan Huang, Yaohui Guo, Henry Lam, Ding Zhao
The distribution used in sampling is pivotal to the performance of the method, but building a suitable distribution requires case-by-case analysis.
no code implementations • 19 Oct 2016 • Michael Minyi Zhang, Henry Lam, Lizhen Lin
Effective and accurate model selection is an important problem in modern data analysis.
no code implementations • 8 Jul 2015 • Qinxun Bai, Henry Lam, Stan Sclaroff
We propose a Bayesian approach for recursively estimating the classifier weights in online learning of a classifier ensemble.
no code implementations • 23 Jun 2014 • Henry Lam, Zhenming Liu
We consider a non-stochastic online learning approach to pricing financial options by modeling the market dynamics as a repeated game between nature (the adversary) and the investor.