Search Results for author: Quoc Phong Nguyen

Found 13 papers, 5 papers with code

Estimation of Recursive Route Choice Models with Incomplete Trip Observations

no code implementations · 27 Apr 2022 · Tien Mai, The Viet Bui, Quoc Phong Nguyen, Tho V. Le

This work concerns the estimation of recursive route choice models in the situation where the trip observations are incomplete, i.e., there are unconnected links (or nodes) in the observations.

Rectified Max-Value Entropy Search for Bayesian Optimization

no code implementations · 28 Feb 2022 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

Although the existing max-value entropy search (MES) is based on the widely celebrated notion of mutual information, its empirical performance can suffer due to two misconceptions whose implications for the exploration-exploitation trade-off are investigated in this paper.

Bayesian Optimization · Misconceptions
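
For reference, the standard MES acquisition (in the formulation of Wang and Jegelka, 2017, which this paper revisits rather than introduces) scores a candidate $x$ by the mutual information between its noisy observation $y$ and the unknown maximum value $f^*$:

$$\alpha_{\text{MES}}(x) = I(\{x, y\}; f^* \mid \mathcal{D}) = H\big[p(y \mid \mathcal{D}, x)\big] - \mathbb{E}_{f^*}\Big[H\big[p(y \mid \mathcal{D}, x, f^*)\big]\Big],$$

where $\mathcal{D}$ denotes the data observed so far.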

Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten

no code implementations · 28 Feb 2022 · Quoc Phong Nguyen, Ryutaro Oikawa, Dinil Mon Divakaran, Mun Choon Chan, Bryan Kian Hsiang Low

Similarly, MCU can be used to erase the lineage of a user's personal data from trained ML models, thus upholding a user's "right to be forgotten".

Machine Unlearning

Optimizing Conditional Value-At-Risk of Black-Box Functions

1 code implementation · NeurIPS 2021 · Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantees to maximize the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS, which are based on the well-established principle of optimism in the face of uncertainty and on Thompson sampling, respectively.

Bayesian Optimization · Thompson Sampling
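
For context, under one common convention for a reward $f(x, W)$ with random environmental factor $W$ and risk level $\alpha \in (0, 1)$ (the paper's exact notation may differ), the quantities being maximized are

$$\text{VaR}_\alpha(x) = \inf\{\omega \in \mathbb{R} : \Pr\big(f(x, W) \le \omega\big) \ge \alpha\}, \qquad \text{CVaR}_\alpha(x) = \mathbb{E}\big[f(x, W) \mid f(x, W) \le \text{VaR}_\alpha(x)\big],$$

i.e., CVaR averages the worst $\alpha$-fraction of outcomes at $x$. Roughly speaking, CV-UCB selects inputs via an optimistic upper confidence bound on $\text{CVaR}_\alpha(x)$, while CV-TS optimizes it under a posterior sample of $f$.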

Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization

1 code implementation · 30 Jul 2021 · Quoc Phong Nguyen, Zhaoxuan Wu, Bryan Kian Hsiang Low, Patrick Jaillet

Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function.

Bayesian Optimization · Face Recognition

Value-at-Risk Optimization with Gaussian Processes

no code implementations · 13 May 2021 · Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

Value-at-risk (VaR) is an established measure to assess risks in critical real-world applications with random environmental factors.

Gaussian Processes · Portfolio Optimization
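
In the convention sketched for the CVaR paper above (again, notation assumed rather than quoted from the paper), VaR at risk level $\alpha$ is simply the $\alpha$-quantile of the random objective:

$$\text{VaR}_\alpha(x) = \inf\{\omega \in \mathbb{R} : \Pr\big(f(x, W) \le \omega\big) \ge \alpha\},$$

so maximizing $\text{VaR}_\alpha(x)$ over $x$ seeks an input whose objective exceeds a high threshold with probability about $1 - \alpha$ despite the randomness in $W$.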

An Information-Theoretic Framework for Unifying Active Learning Problems

1 code implementation · 19 Dec 2020 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant.

Active Learning · Bayesian Optimization

Top-$k$ Ranking Bayesian Optimization

1 code implementation · 19 Dec 2020 · Quoc Phong Nguyen, Sebastian Tay, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents a novel approach to top-$k$ ranking Bayesian optimization (top-$k$ ranking BO), which is a practical and significant generalization of preferential BO to handle top-$k$ ranking and tie/indifference observations.

Bayesian Optimization

Variational Bayesian Unlearning

no code implementations · NeurIPS 2020 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data.

Variational Inference
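
In symbols (notation assumed here), with full data $\mathcal{D}$, erased data $\mathcal{D}_e$, and remaining data $\mathcal{D}_r = \mathcal{D} \setminus \mathcal{D}_e$, the unlearned approximate posterior over model parameters $\theta$ is chosen to be close to the posterior one would obtain by retraining:

$$q_u = \arg\min_{q} \; \text{KL}\big[\, q(\theta) \;\big\|\; p(\theta \mid \mathcal{D}_r) \,\big],$$

the intent being that $q_u$ is obtained by directly unlearning from $\mathcal{D}_e$ rather than by retraining on $\mathcal{D}_r$.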

Inverse Reinforcement Learning with Missing Data

no code implementations · 16 Nov 2019 · Tien Mai, Quoc Phong Nguyen, Kian Hsiang Low, Patrick Jaillet

We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories.

Reinforcement Learning (RL)
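
One way to read this setting (a sketch under assumed notation, not the paper's derivation) is that the likelihood of each demonstrated trajectory marginalizes over its missing segments $\tau_m$ given the observed ones $\tau_o$:

$$\log p(\tau_o \mid \theta) = \log \sum_{\tau_m} p(\tau_o, \tau_m \mid \theta),$$

where $\theta$ parameterizes the expert's reward function; the sum over possible completions is typically what makes estimation with missing data harder than standard IRL.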

Inverse Reinforcement Learning with Locally Consistent Reward Functions

no code implementations · NeurIPS 2015 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

By representing our IRL problem with a probabilistic graphical model, an expectation-maximization (EM) algorithm can be devised to iteratively learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories.

Clustering · Reinforcement Learning (RL) · +1
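
The EM scheme mentioned above can be sketched with the generic updates (latent variables $z$ assigning trajectory segments to locally consistent reward functions are assumed here for illustration):

$$Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \sim p(z \mid \tau, \theta^{(t)})}\big[\log p(\tau, z \mid \theta)\big], \qquad \theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)}),$$

where $\theta$ collects the reward functions and the stochastic transitions between them, and $\tau$ denotes the expert's demonstrated trajectories.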
