Search Results for author: R. Bhushan Gopaluni

Found 16 papers, 3 papers with code

Deep Hankel matrices with random elements

1 code implementation · 23 Apr 2024 · Nathan P. Lawrence, Philip D. Loewen, Shuyuan Wang, Michael G. Forbes, R. Bhushan Gopaluni

Willems' fundamental lemma enables a trajectory-based characterization of linear systems through data-based Hankel matrices.

LEMMA
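For illustration of the data-based Hankel matrices behind Willems' fundamental lemma (a minimal numpy sketch with illustrative names, not code from the paper): build a depth-L Hankel matrix from a measured trajectory and check the persistency-of-excitation rank condition.

```python
import numpy as np

def hankel(traj, depth):
    """Stack length-`depth` windows of a trajectory as columns of a Hankel matrix."""
    traj = np.atleast_2d(traj)                # shape (dim, T)
    dim, T = traj.shape
    cols = T - depth + 1
    H = np.empty((dim * depth, cols))
    for j in range(cols):
        H[:, j] = traj[:, j:j + depth].reshape(-1, order="F")
    return H

# Persistency of excitation of order L requires the depth-L input Hankel
# matrix to have full row rank (here: a scalar input trajectory).
u = np.random.default_rng(0).standard_normal(200)
L = 10
H_u = hankel(u, L)
print(H_u.shape, np.linalg.matrix_rank(H_u) == L)
```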

Stabilizing reinforcement learning control: A modular framework for optimizing over all stable behavior

1 code implementation · 21 Oct 2023 · Nathan P. Lawrence, Philip D. Loewen, Shuyuan Wang, Michael G. Forbes, R. Bhushan Gopaluni

For the training of reinforcement learning agents, the set of all stable linear operators is given explicitly through a matrix factorization approach.

reinforcement-learning
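One way to make such a factorization concrete (a sketch of one known construction, not necessarily the parameterization used in the paper): mapping unconstrained matrices through A = S⁻¹UBS, with S positive definite, U orthogonal, and B a symmetric contraction, always yields a Schur-stable matrix.

```python
import numpy as np

def stable_from_params(W_s, W_u, W_b, margin=1e-3):
    """Map three unconstrained square matrices to a Schur-stable matrix A.

    A = S^{-1} U B S with S positive definite, U orthogonal, and B symmetric
    with ||B||_2 < 1, so the spectral radius of A is at most ||B||_2 < 1.
    """
    n = W_s.shape[0]
    S = W_s @ W_s.T + np.eye(n)                              # positive definite
    U, _ = np.linalg.qr(W_u)                                 # orthogonal
    B_sym = 0.5 * (W_b + W_b.T)
    B = (1.0 - margin) * B_sym / max(np.linalg.norm(B_sym, 2), 1.0)  # contraction
    return np.linalg.solve(S, U @ B @ S)                     # S^{-1} U B S

rng = np.random.default_rng(0)
A = stable_from_params(*(rng.standard_normal((3, 3)) for _ in range(3)))
print(np.max(np.abs(np.linalg.eigvals(A))))                  # strictly below 1
```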

Reinforcement Learning with Partial Parametric Model Knowledge

no code implementations · 26 Apr 2023 · Shuyuan Wang, Philip D. Loewen, Nathan P. Lawrence, Michael G. Forbes, R. Bhushan Gopaluni

We adapt reinforcement learning (RL) methods for continuous control to bridge the gap between complete ignorance and perfect knowledge of the environment.

Continuous Control · reinforcement-learning · +1

A modular framework for stabilizing deep reinforcement learning control

no code implementations · 7 Apr 2023 · Nathan P. Lawrence, Philip D. Loewen, Shuyuan Wang, Michael G. Forbes, R. Bhushan Gopaluni

We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by using the Youla-Kucera parameterization to define the search domain.

reinforcement-learning
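For background, the Youla-Kucera parameterization that defines the search domain is a textbook construction; for an open-loop stable plant G it reads (stated here only as orientation, not as the paper's exact formulation):

```latex
K = Q\,(I - G Q)^{-1}, \qquad Q \ \text{stable},
\qquad\text{with closed loop}\qquad
T = G K (I + G K)^{-1} = G Q .
```

The closed-loop response is affine in the free stable parameter Q, so searching over stable Q amounts to searching over all stabilizing controllers.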

Data Quality Over Quantity: Pitfalls and Guidelines for Process Analytics

no code implementations · 11 Nov 2022 · Lim C. Siang, Shams Elnawawi, Lee D. Rippon, Daniel L. O'Connor, R. Bhushan Gopaluni

A significant portion of the effort involved in advanced process control, process analytics, and machine learning involves acquiring and preparing data.

Time Series · Time Series Analysis

Modern Machine Learning Tools for Monitoring and Control of Industrial Processes: A Survey

no code implementations · 22 Sep 2022 · R. Bhushan Gopaluni, Aditya Tulsyan, Benoit Chachuat, Biao Huang, Jong Min Lee, Faraz Amjad, Seshu Kumar Damarla, Jong Woo Kim, Nathan P. Lawrence

Over the last ten years, we have seen a significant increase in industrial data, tremendous improvement in computational power, and major theoretical advances in machine learning.

Meta-Reinforcement Learning for Adaptive Control of Second Order Systems

no code implementations · 19 Sep 2022 · Daniel G. McClement, Nathan P. Lawrence, Michael G. Forbes, Philip D. Loewen, Johan U. Backström, R. Bhushan Gopaluni

In this work, we briefly reintroduce our methodology and demonstrate how it can be extended to proportional-integral-derivative controllers and second order systems.

Meta-Learning · Meta Reinforcement Learning · +2

Meta-Reinforcement Learning for the Tuning of PI Controllers: An Offline Approach

no code implementations · 17 Mar 2022 · Daniel G. McClement, Nathan P. Lawrence, Johan U. Backstrom, Philip D. Loewen, Michael G. Forbes, R. Bhushan Gopaluni

In tests reported here, the meta-RL agent was trained entirely offline on first order plus time delay systems, and produced excellent results on novel systems drawn from the same distribution of process dynamics used for training.

Meta-Learning · Meta Reinforcement Learning · +2
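To make the training environments concrete (an illustrative sketch only; the process and controller values below are arbitrary placeholders, not taken from the paper): a first-order-plus-time-delay process under a fixed PI controller can be simulated in a few lines.

```python
import numpy as np

# First-order-plus-time-delay (FOPTD) process: tau*dy/dt = -y + K*u(t - theta),
# under a PI controller u = Kp*e + Ki*integral(e). Simple Euler discretization.
K, tau, theta = 2.0, 5.0, 1.0          # process gain, time constant, dead time
Kp, Ki = 0.4, 0.1                      # illustrative PI gains
dt, T = 0.01, 60.0
n = int(T / dt)
delay = int(theta / dt)

y, integ = 0.0, 0.0
u_hist = np.zeros(n + delay)           # buffer so the delayed input is available
setpoint = 1.0
ys = np.zeros(n)
for k in range(n):
    e = setpoint - y
    integ += e * dt
    u = Kp * e + Ki * integ
    u_hist[k + delay] = u
    y += dt * (-y + K * u_hist[k]) / tau   # u_hist[k] is u(t - theta)
    ys[k] = y
print(f"final output: {ys[-1]:.3f}")       # settles near the setpoint of 1.0
```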

Deep Reinforcement Learning with Shallow Controllers: An Experimental Application to PID Tuning

no code implementations · 13 Nov 2021 · Nathan P. Lawrence, Michael G. Forbes, Philip D. Loewen, Daniel G. McClement, Johan U. Backstrom, R. Bhushan Gopaluni

In addition to its simplicity, this approach has several appealing features: No additional hardware needs to be added to the control system, since a PID controller can easily be implemented through a standard programmable logic controller; the control law can easily be initialized in a "safe" region of the parameter space; and the final product, a well-tuned PID controller, has a form that practitioners can reason about and deploy with confidence.

reinforcement-learning · Reinforcement Learning (RL)
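The point that the final product is a plain PID law can be illustrated with a minimal sketch (class name and gains are placeholders, not the paper's implementation): the entire "policy" is three gains applied in a standard discrete PID update.

```python
class PIDPolicy:
    """A PID control law viewed as a policy with just three tunable parameters.

    Illustrative only: in an RL setting the gains (kp, ki, kd) would be the
    parameters the agent adjusts, while the same law could run on a standard PLC.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def __call__(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: the gains chosen here are arbitrary placeholders.
controller = PIDPolicy(kp=1.0, ki=0.2, kd=0.05, dt=0.1)
u = controller(setpoint=1.0, measurement=0.0)
print(u)
```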

Almost Surely Stable Deep Dynamics

1 code implementation · 26 Mar 2021 · Nathan P. Lawrence, Philip D. Loewen, Michael G. Forbes, Johan U. Backström, R. Bhushan Gopaluni

We introduce a method for learning provably stable deep neural network based dynamic models from observed data.
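The one-line summary above does not say how stability is enforced; for intuition only, the sketch below shows one well-known recipe from the wider literature (the Lyapunov-projection trick of Manek and Kolter for deterministic continuous-time models), which is not the stochastic, discrete-time construction of this paper but illustrates how stability can be built into the model class by design.

```python
import numpy as np

def project_stable(f_hat, x, P, alpha=0.1, eps=1e-8):
    """Project nominal dynamics f_hat(x) so that V(x) = x^T P x decreases.

    Returns f(x) = f_hat(x) - grad_V * relu(grad_V . f_hat + alpha*V) / ||grad_V||^2,
    which guarantees dV/dt <= -alpha * V along trajectories of dx/dt = f(x).
    """
    grad_v = 2.0 * P @ x
    v = x @ P @ x
    fx = f_hat(x)
    violation = max(grad_v @ fx + alpha * v, 0.0)
    return fx - grad_v * violation / (grad_v @ grad_v + eps)

def f_hat(x):
    # Toy nominal vector field (a neural network in practice); unstable on its own.
    return np.array([x[1], 2.0 * x[0]])

P = np.eye(2)                                      # quadratic Lyapunov function
x = np.array([1.0, 0.5])
fx = project_stable(f_hat, x, P)
print(fx, "dV/dt =", 2.0 * (P @ x) @ fx)           # dV/dt <= -alpha * V < 0
```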

A Meta-Reinforcement Learning Approach to Process Control

no code implementations · 25 Mar 2021 · Daniel G. McClement, Nathan P. Lawrence, Philip D. Loewen, Michael G. Forbes, Johan U. Backström, R. Bhushan Gopaluni

Meta-learning is appealing for process control applications because the perturbations to a process required to train an AI controller can be costly and unsafe.

Meta-Learning · Meta Reinforcement Learning · +2

On-line Bayesian parameter estimation in general non-linear state-space models: A tutorial and new results

no code implementations · 12 Jul 2013 · Aditya Tulsyan, Biao Huang, R. Bhushan Gopaluni, J. Fraser Forbes

The simultaneous estimation is performed by filtering an extended vector of states and parameters using an adaptive sequential-importance-resampling (SIR) filter with a kernel density estimation method.

Density Estimation
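A toy sketch of the augmented-state idea, filtering states and parameters together with an SIR particle filter (scalar model with illustrative noise levels; the simple jitter on the parameter particles stands in for the paper's kernel density estimation step):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear state-space model: x_{k+1} = theta*sin(x_k) + w,  y_k = x_k + v.
theta_true, q, r = 0.8, 0.05, 0.1
T, N = 100, 2000

# Simulate measurements.
x, ys = 0.5, []
for _ in range(T):
    x = theta_true * np.sin(x) + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

# SIR filter over the augmented vector (x, theta).
px = rng.normal(0.0, 1.0, N)          # state particles
pt = rng.uniform(0.0, 2.0, N)         # parameter particles
for y in ys:
    px = pt * np.sin(px) + rng.normal(0, np.sqrt(q), N)     # propagate states
    pt = pt + rng.normal(0, 0.01, N)                        # kernel-style jitter
    w = np.exp(-0.5 * (y - px) ** 2 / r)                    # likelihood weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                        # resample
    px, pt = px[idx], pt[idx]

print("theta estimate:", pt.mean(), " (true value:", theta_true, ")")
```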
