Search Results for author: Kian Hsiang Low

Found 28 papers, 7 papers with code

R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games

no code implementations ICML 2020 Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet, Teck-Hua Ho

This paper presents a recursive reasoning formalism of Bayesian optimization (BO) to model the reasoning process in the interactions between boundedly rational, self-interested agents with unknown, complex, and costly-to-evaluate payoff functions in repeated games, which we call Recursive Reasoning-Based BO (R2-B2).

Bayesian Optimization, Multi-agent Reinforcement Learning
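
R2-B2 layers recursive reasoning on top of standard GP-based Bayesian optimization. For reference, here is a minimal single-agent GP-UCB step of the kind such methods build on (a sketch assuming an RBF kernel with unit prior variance; function names are illustrative, and this is not the authors' recursive-reasoning strategy):

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Exact GP posterior mean and variance at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum('ij,ji->i', Ks, v)  # prior variance is 1
    return mu, np.maximum(var, 0.0)

def gp_ucb_step(X, y, candidates, beta=2.0):
    """Pick the candidate maximising the UCB acquisition mu + sqrt(beta)*sigma."""
    mu, var = gp_posterior(X, y, candidates)
    return candidates[np.argmax(mu + np.sqrt(beta * var))]
```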

Nonmyopic Gaussian Process Optimization with Macro-Actions

no code implementations22 Feb 2020 Dmitrii Kharkovskii, Chun Kai Ling, Kian Hsiang Low

This paper presents a multi-staged approach to nonmyopic adaptive Gaussian process optimization (GPO) for Bayesian optimization (BO) of unknown, highly complex objective functions that, in contrast to existing nonmyopic adaptive BO algorithms, exploits the notion of macro-actions for scaling up to a further lookahead to match up to a larger available budget.

Bayesian Optimization

Scalable Variational Bayesian Kernel Selection for Sparse Gaussian Process Regression

no code implementations5 Dec 2019 Tong Teng, Jie Chen, Yehong Zhang, Kian Hsiang Low

To achieve this, we represent the probabilistic kernel as an additional variational variable in a variational inference (VI) framework for SGPR models where its posterior belief is learned together with that of the other variational variables (i.e., inducing variables and kernel hyperparameters).

regression, Stochastic Optimization +1

Inverse Reinforcement Learning with Missing Data

no code implementations16 Nov 2019 Tien Mai, Quoc Phong Nguyen, Kian Hsiang Low, Patrick Jaillet

We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories.

reinforcement-learning, Reinforcement Learning (RL)

Implicit Posterior Variational Inference for Deep Gaussian Processes

1 code implementation NeurIPS 2019 Haibin Yu, Yizhou Chen, Zhongxiang Dai, Kian Hsiang Low, Patrick Jaillet

This paper presents an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief and still preserve time efficiency.

Gaussian Processes, Variational Inference

Bayesian Optimization with Binary Auxiliary Information

no code implementations17 Jun 2019 Yehong Zhang, Zhongxiang Dai, Kian Hsiang Low

This paper presents novel mixed-type Bayesian optimization (BO) algorithms to accelerate the optimization of a target objective function by exploiting correlated auxiliary information of binary type that can be more cheaply obtained, such as in policy search for reinforcement learning and hyperparameter tuning of machine learning models with early stopping.

Bayesian Optimization

Towards Robust ResNet: A Small Step but A Giant Leap

no code implementations28 Feb 2019 Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, Mohan Kankanhalli

Our analytical studies reveal that the step factor h in the Euler method is able to control the robustness of ResNet in both its training and generalization.
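
Viewing a residual block as one Euler step x_{t+1} = x_t + h·f(x_t), shrinking the step factor h damps how input perturbations grow through depth, which is the robustness intuition here. A toy NumPy sketch of this view (illustrative two-layer residual branch, not the authors' trained architecture):

```python
import numpy as np

def residual_block(x, W1, W2, h=0.1):
    """One residual block viewed as an Euler step: x + h * f(x),
    where f is a small two-layer ReLU MLP residual branch."""
    fx = W2 @ np.maximum(W1 @ x, 0.0)
    return x + h * fx

def resnet_forward(x, blocks, h=0.1):
    """Stack of residual blocks sharing the same step factor h."""
    for W1, W2 in blocks:
        x = residual_block(x, W1, W2, h)
    return x
```

With random weights, the per-block Lipschitz factor is roughly 1 + h·L, so a small h keeps perturbations from amplifying across many blocks.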

Collective Online Learning of Gaussian Processes in Massive Multi-Agent Systems

no code implementations23 May 2018 Trong Nghia Hoang, Quang Minh Hoang, Kian Hsiang Low, Jonathan How

Distributed machine learning (ML) is a modern computation paradigm that divides its workload into independent tasks that can be simultaneously achieved by multiple machines (i.e., agents) for better scalability.

Gaussian Processes

Decentralized High-Dimensional Bayesian Optimization with Factor Graphs

no code implementations19 Nov 2017 Trong Nghia Hoang, Quang Minh Hoang, Ruofei Ouyang, Kian Hsiang Low

This paper presents a novel decentralized high-dimensional Bayesian optimization (DEC-HBO) algorithm that, in contrast to existing HBO algorithms, can exploit the interdependent effects of various input components on the output of the unknown objective function f for boosting the BO performance and still preserve scalability in the number of input dimensions without requiring prior knowledge or the existence of a low (effective) dimension of the input space.

Bayesian Optimization

Gaussian Process Decentralized Data Fusion Meets Transfer Learning in Large-Scale Distributed Cooperative Perception

no code implementations16 Nov 2017 Ruofei Ouyang, Kian Hsiang Low

To achieve this, we propose a novel transfer learning mechanism for a team of agents capable of sharing and transferring information encapsulated in a summary based on a support set to that utilizing a different support set with some loss that can be theoretically bounded and analyzed.

Transfer Learning

Stochastic Variational Inference for Bayesian Sparse Gaussian Process Regression

no code implementations1 Nov 2017 Haibin Yu, Trong Nghia Hoang, Kian Hsiang Low, Patrick Jaillet

This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises.

GPR, regression +2

A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression

no code implementations18 Nov 2016 Quang Minh Hoang, Trong Nghia Hoang, Kian Hsiang Low

While much research effort has been dedicated to scaling up sparse Gaussian process (GP) models based on inducing variables for big data, little attention is afforded to the other less explored class of low-rank GP approximations that exploit the sparse spectral representation of a GP kernel.

regression, Stochastic Optimization
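
Sparse spectrum approximations replace the kernel with random Fourier features, reducing GP regression to Bayesian linear regression in feature space. A minimal sketch (assuming an RBF kernel; this batch closed-form solve stands in for the paper's stochastic variational hyperparameter learner, and the function names are illustrative):

```python
import numpy as np

def rff_features(X, omegas, phases):
    """Random Fourier features approximating an RBF kernel."""
    m = omegas.shape[0]
    return np.sqrt(2.0 / m) * np.cos(X @ omegas.T + phases)

def ssgp_fit_predict(X, y, Xs, m=200, ls=0.5, noise=1e-2, seed=0):
    """Sparse-spectrum GP regression: Bayesian linear regression on RFFs."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    omegas = rng.normal(scale=1.0 / ls, size=(m, d))  # spectral samples of the RBF kernel
    phases = rng.uniform(0.0, 2.0 * np.pi, size=m)
    Phi = rff_features(X, omegas, phases)             # (n, m)
    A = Phi.T @ Phi + noise * np.eye(m)
    w = np.linalg.solve(A, Phi.T @ y)                 # posterior mean weights
    return rff_features(Xs, omegas, phases) @ w
```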

DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks

1 code implementation5 Jan 2016 Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua

The performance of deep neural networks is well-known to be sensitive to the setting of their hyperparameters.

Multi-Agent Continuous Transportation with Online Balanced Partitioning

no code implementations23 Nov 2015 Chao Wang, Somchaya Liemhetcharat, Kian Hsiang Low

A continuous transportation task is one in which a multi-agent team visits a number of fixed locations, picks up objects, and delivers them to a final destination.

Gaussian Process Planning with Lipschitz Continuous Reward Functions: Towards Unifying Bayesian Optimization, Active Learning, and Beyond

no code implementations21 Nov 2015 Chun Kai Ling, Kian Hsiang Low, Patrick Jaillet

This paper presents a novel nonmyopic adaptive Gaussian process planning (GPP) framework endowed with a general class of Lipschitz continuous reward functions that can unify some active learning/sensing and Bayesian optimization criteria and offer practitioners some flexibility to specify their desired choices for defining new tasks/problems.

Active Learning, Bayesian Optimization
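
The unifying idea is that swapping the reward function recovers different criteria: a UCB-style reward yields BO-like behaviour, while an entropy reward yields active learning. A myopic (one-step) sketch over a discrete candidate set, with illustrative reward names (the paper's planner is nonmyopic and more general):

```python
import numpy as np

def ucb_reward(mu, sigma, beta=2.0):
    """BO-style reward: favours high predicted value plus uncertainty."""
    return mu + np.sqrt(beta) * sigma

def entropy_reward(mu, sigma):
    """Active-learning-style reward: differential entropy of the
    predictive Gaussian, 0.5*log(2*pi*e*sigma^2); uses only uncertainty."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def greedy_plan(mu, sigma, reward):
    """Myopic special case: pick the location with the highest reward."""
    return int(np.argmax(reward(mu, sigma)))
```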

Near-Optimal Active Learning of Multi-Output Gaussian Processes

1 code implementation21 Nov 2015 Yehong Zhang, Trong Nghia Hoang, Kian Hsiang Low, Mohan Kankanhalli

This paper addresses the problem of active learning of a multi-output Gaussian process (MOGP) model representing multiple types of coexisting correlated environmental phenomena.

Active Learning, Gaussian Processes

Parallel Gaussian Process Regression for Big Data: Low-Rank Representation Meets Markov Approximation

no code implementations17 Nov 2014 Kian Hsiang Low, Jiangbo Yu, Jie Chen, Patrick Jaillet

To improve its scalability, this paper presents a low-rank-cum-Markov approximation (LMA) of the GP model that is novel in leveraging the dual computational advantages stemming from complementing a low-rank approximate representation of the full-rank GP based on a support set of inputs with a Markov approximation of the resulting residual process; the latter approximation is guaranteed to be closest in the Kullback-Leibler distance criterion subject to some constraint and is considerably more refined than that of existing sparse GP models utilizing low-rank representations due to its more relaxed conditional independence assumption (especially with larger data).

regression
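
The low-rank half of this construction is the classical subset-of-regressors (SoR) approximation on a support set; the Markov approximation of the residual process is the paper's refinement and is not shown. A minimal SoR sketch (illustrative names):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sor_predict(X, y, Xs, S, noise=1e-2):
    """Subset-of-regressors predictive mean using support set S (m << n):
    mu(x*) = k(x*, S) (noise*Kss + Ksx Kxs)^{-1} Ksx y, costing
    O(n m^2) instead of the O(n^3) of full GP regression."""
    Kss = rbf(S, S)
    Kxs = rbf(X, S)                    # (n, m)
    A = noise * Kss + Kxs.T @ Kxs
    w = np.linalg.solve(A, Kxs.T @ y)
    return rbf(Xs, S) @ w
```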

Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena

no code implementations9 Aug 2014 Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan, Ali Oran, Patrick Jaillet, John Dolan, Gaurav Sukhatme

The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots.

Parallel Gaussian Process Regression with Low-Rank Covariance Matrix Approximations

no code implementations9 Aug 2014 Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan Tan, Patrick Jaillet

We theoretically guarantee the predictive performance of our proposed parallel GPs to be equivalent to that of some centralized approximate GP regression methods: The computation of their centralized counterparts can be distributed among parallel machines, hence achieving greater time efficiency and scalability.

Gaussian Processes, regression

GP-Localize: Persistent Mobile Robot Localization using Online Sparse Gaussian Process Observation Model

no code implementations21 Apr 2014 Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul

Central to robot exploration and mapping is the task of persistent localization in environmental fields characterized by spatially correlated measurements.

Gaussian Process-Based Decentralized Data Fusion and Active Sensing for Mobility-on-Demand System

no code implementations2 Jun 2013 Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan

This paper presents a novel decentralized data fusion and active sensing algorithm for real-time, fine-grained mobility demand sensing and prediction with a fleet of autonomous robotic vehicles in a MoD system.

Information-Theoretic Approach to Efficient Adaptive Path Planning for Mobile Robotic Environmental Sensing

no code implementations27 May 2013 Kian Hsiang Low, John M. Dolan, Pradeep Khosla

The time complexity of solving MASP approximately depends on the map resolution, which limits its use in large-scale, high-resolution exploration and mapping.

Interactive POMDP Lite: Towards Practical Planning to Predict and Exploit Intentions for Interacting with Self-Interested Agents

1 code implementation18 Apr 2013 Trong Nghia Hoang, Kian Hsiang Low

A key challenge in non-cooperative multi-agent systems is that of developing efficient planning algorithms for intelligent agents to interact and perform effectively among boundedly rational, self-interested agents (e.g., humans).

A General Framework for Interacting Bayes-Optimally with Self-Interested Agents using Arbitrary Parametric Model and Model Prior

no code implementations7 Apr 2013 Trong Nghia Hoang, Kian Hsiang Low

Recent advances in Bayesian reinforcement learning (BRL) have shown that Bayes-optimality is theoretically achievable by modeling the environment's latent dynamics using a Flat-Dirichlet-Multinomial (FDM) prior.

Multi-agent Reinforcement Learning, reinforcement-learning +1
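
The FDM prior is conjugate: each row of the transition matrix gets an independent Dirichlet prior, so the posterior mean over transition probabilities is just smoothed empirical counts. A minimal sketch (illustrative function name):

```python
import numpy as np

def dirichlet_posterior_mean(counts, alpha=1.0):
    """Posterior mean of each row's transition probabilities under a
    symmetric Dirichlet(alpha) prior: (counts + alpha) / row_total."""
    c = counts + alpha
    return c / c.sum(-1, keepdims=True)
```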
