RL-Controller: a reinforcement learning framework for active structural control

To maintain structural integrity and functionality over a structure's design life, engineers must account for natural hazards as well as operational load levels. Active control systems are an effective solution for controlling structural response when a structure is subjected to unexpected extreme loads. However, the development of such systems through traditional means is limited by their model-dependent nature. Recent advances in adaptive learning methods, in particular reinforcement learning (RL) for real-time decision-making, together with the rapid growth of high-performance computing resources, allow structural engineers to transform the classic model-based active control problem into a purely data-driven one. In this paper, we present a novel RL-based approach to designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment whose attributes and functionalities are defined to model active structural control mechanisms in detail. We show that the proposed framework is readily trainable on a five-story benchmark building, achieving average reductions of 65% in inter-story drift (ISD) under strong ground motions. In a comparative study with a linear-quadratic-Gaussian (LQG) active controller, we demonstrate that the proposed model-free algorithm learns actuator forcing strategies that yield higher performance, e.g., 25% greater ISD reductions on average relative to LQG, without using prior information about the mechanical properties of the system.
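For intuition, the following is a minimal sketch of what an RL environment for this task might look like, assuming a lumped-mass shear-building model and a Gym-style reset/step interface. The class name ShearBuildingEnv, its parameters, and the drift-penalty reward are illustrative assumptions for this sketch, not the paper's actual RL-Controller API.

```python
import numpy as np

class ShearBuildingEnv:
    """Minimal n-story shear-building environment (illustrative sketch only).

    Dynamics: M x'' + C x' + K x = -M 1 a_g + u, integrated with a
    semi-implicit Euler step. Actions are normalized actuator forces
    per story; the reward penalizes inter-story drifts (ISD).
    """

    def __init__(self, n_stories=5, m=1e3, k=1e6, c=1e3,
                 max_force=1e4, dt=0.01):
        self.n, self.dt, self.max_force = n_stories, dt, max_force
        self.M = m * np.eye(n_stories)                     # lumped story masses
        # Tridiagonal stiffness matrix of a shear building with equal
        # story stiffness k; the top story carries stiffness only from below.
        self.K = k * (2 * np.eye(n_stories)
                      - np.eye(n_stories, k=1) - np.eye(n_stories, k=-1))
        self.K[-1, -1] = k
        self.C = (c / k) * self.K                          # stiffness-proportional damping
        self.reset()

    def reset(self, quake=None):
        """Reset state; `quake` is a ground-acceleration time series."""
        self.x = np.zeros(self.n)                          # story displacements
        self.v = np.zeros(self.n)                          # story velocities
        self.t = 0
        self.quake = quake if quake is not None else np.zeros(1)
        return np.concatenate([self.x, self.v])

    def step(self, action):
        u = np.clip(action, -1.0, 1.0) * self.max_force    # actuator forces
        a_g = self.quake[min(self.t, len(self.quake) - 1)]
        # Equation of motion: M a = -C v - K x - M 1 a_g + u
        acc = np.linalg.solve(
            self.M,
            -self.C @ self.v - self.K @ self.x
            - self.M @ np.ones(self.n) * a_g + u)
        self.v += self.dt * acc                            # semi-implicit Euler
        self.x += self.dt * self.v
        self.t += 1
        # Reward penalizes squared inter-story drifts, the paper's key metric
        drift = np.diff(np.concatenate([[0.0], self.x]))
        reward = -np.sum(drift ** 2)
        done = self.t >= len(self.quake)
        return np.concatenate([self.x, self.v]), reward, done, {}
```

An off-the-shelf RL agent would interact with reset/step in the usual loop, with the reward mirroring the abstract's focus on ISD reduction. For the LQG baseline mentioned above, a hedged sketch of the regulator half (the LQR gain obtained from the continuous-time algebraic Riccati equation) is shown below; a full LQG design would also include a Kalman filter for state estimation, and the weighting matrices are illustrative, not the paper's tuning.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Return K such that u = -K @ x_hat minimizes the quadratic LQR cost."""
    P = solve_continuous_are(A, B, Q, R)   # solve A'P + PA - PBR^{-1}B'P + Q = 0
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B' P
```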
