no code implementations • 27 Apr 2022 • Tien Mai, The Viet Bui, Quoc Phong Nguyen, Tho V. Le
This work concerns the estimation of recursive route choice models in the situation where the trip observations are incomplete, i.e., there are unconnected links (or nodes) in the observations.
no code implementations • 28 Feb 2022 • Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
Although the existing max-value entropy search (MES) is based on the widely celebrated notion of mutual information, its empirical performance can suffer due to two misconceptions whose implications for the exploration-exploitation trade-off are investigated in this paper.
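For context, the sketch below computes the standard MES acquisition value under a Gaussian posterior, following the closed form of Wang & Jegelka (2017); the function name and arguments (mu, sigma, max_samples) are placeholders for illustration, and this is not necessarily the exact variant analyzed in the paper.

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, max_samples):
    """Max-value entropy search acquisition under a Gaussian posterior.

    mu, sigma   : posterior mean and std of f at the candidate points (arrays)
    max_samples : sampled values of the global maximum f* (array of length K)
    Returns the MES acquisition value at each candidate point.
    """
    mu, sigma = np.asarray(mu), np.maximum(np.asarray(sigma), 1e-12)
    max_samples = np.asarray(max_samples)
    gamma = (max_samples[None, :] - mu[:, None]) / sigma[:, None]
    cdf = np.clip(norm.cdf(gamma), 1e-12, None)
    pdf = norm.pdf(gamma)
    # Average over the sampled maxima of the information gained about f*
    return np.mean(gamma * pdf / (2.0 * cdf) - np.log(cdf), axis=1)
```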
no code implementations • 28 Feb 2022 • Quoc Phong Nguyen, Ryutaro Oikawa, Dinil Mon Divakaran, Mun Choon Chan, Bryan Kian Hsiang Low
Similarly, MCU can be used to erase the lineage of a user's personal data from trained ML models, thus upholding a user's "right to be forgotten".
1 code implementation • NeurIPS 2021 • Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet
This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantees for maximizing the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS, which are based on the well-established principle of optimism in the face of uncertainty and on Thompson sampling, respectively.
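For context, CVaR of a reward at level alpha is the expected value over the worst alpha-fraction of outcomes. A minimal Monte Carlo estimator is sketched below; the lower-tail convention and the toy sampling step are assumptions for illustration, not the paper's exact setup or either of the CV-UCB/CV-TS algorithms.

```python
import numpy as np

def empirical_cvar(values, alpha=0.1):
    """Monte Carlo estimate of the lower-tail CVaR of a reward at level alpha.

    values : samples of f(x, Z) at a fixed input x, drawn over the random
             environmental variable Z
    alpha  : tail probability; CVaR is the mean of the worst alpha-fraction
    """
    values = np.sort(np.asarray(values))
    k = max(1, int(np.ceil(alpha * len(values))))
    return values[:k].mean()

# Example: estimate CVaR of a noisy objective at one candidate input x
rng = np.random.default_rng(0)
samples = np.sin(2.0) + 0.3 * rng.standard_normal(1000)  # f(x, Z) with x fixed
print(empirical_cvar(samples, alpha=0.1))
```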
1 code implementation • 30 Jul 2021 • Quoc Phong Nguyen, Zhaoxuan Wu, Bryan Kian Hsiang Low, Patrick Jaillet
Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function.
no code implementations • 13 May 2021 • Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet
Value-at-risk (VaR) is an established measure to assess risks in critical real-world applications with random environmental factors.
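A minimal sketch of the corresponding empirical estimator, assuming the lower-tail convention in which VaR at level alpha is the alpha-quantile of the reward distribution (the function name is illustrative, not from the paper):

```python
import numpy as np

def empirical_var(values, alpha=0.1):
    """Empirical value-at-risk at level alpha: the alpha-quantile of the reward,
    i.e. the threshold below which the worst alpha-fraction of outcomes fall."""
    return np.quantile(np.asarray(values), alpha)
```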
1 code implementation • 19 Dec 2020 • Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant.
1 code implementation • 19 Dec 2020 • Quoc Phong Nguyen, Sebastian Tay, Bryan Kian Hsiang Low, Patrick Jaillet
This paper presents a novel approach to top-$k$ ranking Bayesian optimization (top-$k$ ranking BO), which is a practical and significant generalization of preferential BO to handle top-$k$ ranking and tie/indifference observations.
no code implementations • NeurIPS 2020 • Sreejith Balakrishnan, Quoc Phong Nguyen, Bryan Kian Hsiang Low, Harold Soh
The problem of inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration.
no code implementations • NeurIPS 2020 • Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of the model parameters after directly unlearning from the erased data and the exact posterior belief from retraining with the remaining data.
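As a rough illustration of this objective (not the paper's unlearning algorithm), the KL divergence has a closed form when both beliefs are approximated as multivariate Gaussians; the helper below and the role assignment in its docstring are assumptions for exposition only.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) between two multivariate Gaussians.

    In the unlearning setting, N(mu0, cov0) would play the role of the
    approximate posterior after unlearning the erased data, and N(mu1, cov1)
    the exact posterior from retraining on the remaining data.
    """
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```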
no code implementations • 16 Nov 2019 • Tien Mai, Quoc Phong Nguyen, Kian Hsiang Low, Patrick Jaillet
We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories.
1 code implementation • 15 Mar 2019 • Quoc Phong Nguyen, Kar Wai Lim, Dinil Mon Divakaran, Kian Hsiang Low, Mun Choon Chan
This paper looks into the problem of detecting network anomalies by analyzing NetFlow records.
no code implementations • NeurIPS 2015 • Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
By representing our IRL problem as a probabilistic graphical model, we can devise an expectation-maximization (EM) algorithm that iteratively learns the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert's demonstrated trajectories.
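The sketch below only illustrates the generic EM alternation (E-step responsibilities, M-step re-estimation) on a simple 1-D Gaussian mixture; it is a stand-in for the pattern, not the paper's graphical model over reward functions and transition dynamics, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def em_1d_gmm(x, K=2, iters=50, seed=0):
    """Generic EM alternation on a 1-D Gaussian mixture: the E-step computes
    responsibilities of the latent components, and the M-step re-estimates the
    parameters so as to increase the data likelihood."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K)
    sigma = np.full(K, x.std())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.maximum(np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-6)
        pi = nk / len(x)
    return mu, sigma, pi
```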