Search Results for author: Wenhan Cao

Found 6 papers, 1 paper with code

Convolutional Bayesian Filtering

no code implementations • 30 Mar 2024 • Wenhan Cao, Shiqi Liu, Chang Liu, Zeyu He, Stephen S.-T. Yau, Shengbo Eben Li

In this paper, we find that by adding an additional event that stipulates an inequality condition, we can transform the conditional probability into a special integral that is analogous to convolution.
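The sentence above is easier to parse with a generic probability identity (an illustration of my own, not the paper's construction): conditioning on the inequality event $\{d(Y,y)\le\delta\}$ instead of the exact event $\{Y=y\}$ replaces a point evaluation of the likelihood by an integral, and for the scalar Euclidean distance that integral is precisely a convolution with a box kernel,

$$P\big(d(Y,y)\le\delta \mid x\big)=\int \mathbf{1}\{d(y',y)\le\delta\}\,p(y'\mid x)\,\mathrm{d}y' \;=\; \big(p(\cdot\mid x)*\mathbf{1}_{[-\delta,\delta]}\big)(y)\quad\text{when } d(y',y)=|y'-y|.$$

How the paper generalizes the distance and the conditioning event is not shown in this snippet.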

Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control

1 code implementation • 27 Feb 2024 • Wenhan Cao, Wei Pan

We prove the local convergence rates for IntRL using the trapezoidal rule and Bayesian quadrature with a Matérn kernel to be $O(N^{-2})$ and $O(N^{-b})$, respectively, where $N$ is the number of evenly spaced samples and $b$ is the Matérn kernel's smoothness parameter.
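The quoted trapezoidal-rule rate can be checked numerically on any smooth test integrand. The sketch below is not the paper's IntRL code (whose repository is linked from this listing); it only reproduces the textbook $O(N^{-2})$ behaviour referred to in the sentence above, with a test integrand chosen here for illustration.

```python
import numpy as np

def trapezoid(y, dx):
    """Composite trapezoidal rule for evenly spaced samples y with spacing dx."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

a, b = 0.0, 1.0
exact = np.sin(1.0)                      # integral of cos(x) over [0, 1]

for n in (11, 21, 41, 81, 161):          # n evenly spaced samples
    x = np.linspace(a, b, n)
    err = abs(trapezoid(np.cos(x), x[1] - x[0]) - exact)
    print(f"N = {n:4d}   error = {err:.3e}")
# Doubling the sample density cuts the error by roughly 4x, i.e. O(N^{-2}).
```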

Robust Bayesian Inference for Moving Horizon Estimation

no code implementations • 5 Oct 2022 • Wenhan Cao, Chang Liu, Zhiqian Lan, Shengbo Eben Li, Wei Pan, Angelo Alessandri

The accuracy of moving horizon estimation (MHE) suffers significantly in the presence of measurement outliers.

Bayesian Inference • Combinatorial Optimization

On the Optimization Landscape of Dynamic Output Feedback: A Case Study for Linear Quadratic Regulator

no code implementations • 12 Sep 2022 • Jingliang Duan, Wenhan Cao, Yang Zheng, Lin Zhao

At the core of our results is the uniqueness of the stationary point of dLQR when it is observable, which takes the concise form of an observer-based controller with the optimal similarity transformation.

Decision Making • Policy Gradient Methods
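For readers unfamiliar with the terminology in the snippet above, a generic observer-based controller (written here in discrete time as an assumption of mine, not the paper's exact parametrization) has the form

$$\xi_{k+1}=A\xi_k+Bu_k+L\,(y_k-C\xi_k),\qquad u_k=-K\xi_k,$$

and replacing the internal state by $T\xi_k$ for any invertible $T$ yields an input-output-equivalent controller; the "optimal similarity transformation" the authors mention selects one representative from this equivalence class.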

Primal-dual Estimator Learning: an Offline Constrained Moving Horizon Estimation Method with Feasibility and Near-optimality Guarantees

no code implementations • 6 Apr 2022 • Wenhan Cao, Jingliang Duan, Shengbo Eben Li, Chen Chen, Chang Liu, Yu Wang

Both the primal and dual estimators are learned from data using supervised learning techniques, and an explicit sample size is provided, which enables us to guarantee the quality of each learned estimator in terms of feasibility and optimality.
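A minimal sketch of the "learn an estimator offline by supervised learning" idea, under assumptions of my own: a scalar linear-Gaussian system and a plain least-squares map from a measurement window to the current state. The paper's primal-dual structure, constraint handling, and sample-size guarantee are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup (not from the paper): scalar linear-Gaussian system;
# the estimator is a least-squares map from the last H measurements to the state.
A, C, H, T = 0.9, 1.0, 5, 2000

def simulate(steps):
    x, xs, ys = 0.0, [], []
    for _ in range(steps):
        x = A * x + 0.1 * rng.standard_normal()
        ys.append(C * x + 0.1 * rng.standard_normal())
        xs.append(x)
    return np.array(xs), np.array(ys)

xs, ys = simulate(T)

# Supervised-learning dataset: input = measurement window, label = true state.
X = np.stack([ys[t - H:t] for t in range(H, T)])
z = xs[H:T]

theta, *_ = np.linalg.lstsq(X, z, rcond=None)   # fit the estimator offline
print("RMS estimation error:", np.sqrt(np.mean((X @ theta - z) ** 2)))
```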

Approximate Optimal Filter for Linear Gaussian Time-invariant Systems

no code implementations • 9 Mar 2021 • Kaiming Tang, Shengbo Eben Li, Yuming Yin, Yang Guan, Jingliang Duan, Wenhan Cao, Jie Li

The equivalence holds under certain conditions on the initial state distributions and policy formats, in which the system state is the estimation error, the control input is the filter gain, and the control objective function is the accumulated estimation error.
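The correspondence described in the snippet can be made concrete with the standard linear-Gaussian error dynamics (a textbook identity, not a claim about the paper's exact formulation): for a filter $\hat{x}_{k+1}=A\hat{x}_k+K_k(y_k-C\hat{x}_k)$ applied to $x_{k+1}=Ax_k+w_k$, $y_k=Cx_k+v_k$, the estimation error $e_k=x_k-\hat{x}_k$ obeys

$$e_{k+1}=(A-K_kC)\,e_k+w_k-K_kv_k,\qquad J=\mathbb{E}\sum_{k}e_k^{\top}e_k,$$

so the "state" is $e_k$, the "control input" is the gain $K_k$, and the "control objective" is the accumulated estimation error.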
