ESIC Seminars Spring 2019
RESILIENT CYBER PHYSICAL SYSTEMS
DR. ABHISHEK DUBEY and DR. GAUTAM BISWAS
Electrical Engineering and Computer Science, Vanderbilt University
Tuesday, April 23
11:00am – 12:00pm
ETRL 101
ALSO AVAILABLE via:
Overview
Resilience, defined as the ability to recover from failures, is a key requirement for cyber-physical systems (CPS), especially critical systems such as power networks. Traditional approaches relying on redundancy and design diversity leave the system vulnerable to common-mode failures and latent software faults. Thus, CPS must integrate online systems health management: online anomaly detection, fault-source isolation, and, when possible, recovery from failures. In this presentation we will describe the latest techniques developed by our teams for integrating systems health management into cyber-physical systems such as transportation networks and power networks. The key ideas are to build unsupervised classifiers for efficient online anomaly detection, to combine model-driven and data-driven techniques for fault-source isolation, and to design a system architecture built from the ground up around a highly resilient, hierarchical fault management scheme. Each layer is made robust so that faults from the layer below cannot propagate upward and cause a failure; a layer may assume certain behavior of the layer below, and if that assumption is violated, the lower layer must inform the layer above. We will show examples from the domains of transactive energy, aircraft, and transportation networks to illustrate the methodology.
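The unsupervised online anomaly detection mentioned above can be illustrated with a minimal sketch. This is not the speakers' actual classifier; it is a generic, assumed example in which nominal sensor behavior is modeled by its mean and covariance, and new readings are flagged when their Mahalanobis distance exceeds a threshold.

```python
import numpy as np

def fit_anomaly_detector(data, threshold=3.0):
    """Fit a simple unsupervised detector: model nominal sensor
    readings by their mean/covariance and flag far-away outliers."""
    mean = data.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(data, rowvar=False))

    def is_anomalous(x):
        # Mahalanobis distance of a new observation from nominal behavior
        d = np.sqrt((x - mean) @ inv_cov @ (x - mean))
        return d > threshold

    return is_anomalous

# Hypothetical nominal training data: two correlated sensor channels
rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, size=(500, 2))
detect = fit_anomaly_detector(nominal)

print(detect(np.array([0.1, -0.2])))  # typical reading -> False
print(detect(np.array([8.0, 8.0])))   # faulty reading  -> True
```

In a layered scheme, a detector like this would run at each layer, so that a violated behavioral assumption is flagged locally before the fault can propagate upward.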
WIDE-AREA CONTROL USING REINFORCEMENT LEARNING, BUT IN SEVERELY-REDUCED DIMENSION
DR. ARANYA CHAKRABORTTY
Wednesday, May 1
11:00am – 12:00pm
ETRL 101
ALSO AVAILABLE via AMS #776080
Overview
Reinforcement learning (RL) is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is large, conventional RL can result in unacceptably long learning times. In this talk I will present recent results on resolving this problem with an alternative approach that combines dimensionality reduction with RL theory. The approach constructs a compressed state vector by projecting the measured state through a projection matrix, which is built offline using probing signals. This matrix can be viewed as an empirical controllability Gramian that captures the level of redundancy in the open-loop network model. An RL controller is then learned using the compressed state instead of the original state, such that the resulting cost is close to the optimal LQR cost. The talk will end by highlighting the potential use of this method, with its associated benefits and challenges, for wide-area control of power systems.
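The projection-then-control pipeline described above can be sketched as follows. This is an assumed illustration, not the speaker's implementation: the actual method learns the controller model-free via RL, whereas this sketch uses a known toy model and a Riccati iteration to show the structural idea, i.e., building a projection basis from state snapshots collected under probing inputs and designing the controller on the compressed state. All names (`A`, `B`, `V`, `Kr`) are hypothetical.

```python
import numpy as np

def empirical_projection(snapshots, r):
    """Projection basis from state snapshots gathered under probing
    inputs: the top-r left singular vectors, an empirical-Gramian-style
    basis capturing the dominant (least redundant) directions."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]  # n x r, orthonormal columns

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical large LTI network (model known here only for illustration)
n, m, r = 20, 2, 4
rng = np.random.default_rng(1)
A = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]  # stable dynamics
B = rng.normal(size=(n, m))

# Probing phase: record open-loop state snapshots under random inputs
x = rng.normal(size=n)
snaps = []
for _ in range(200):
    x = A @ x + B @ rng.normal(size=m)
    snaps.append(x)
V = empirical_projection(np.array(snaps).T, r)

# Design the controller on the compressed state z = V.T @ x
Ar, Br = V.T @ A @ V, V.T @ B
Kr = lqr_gain(Ar, Br, np.eye(r), np.eye(m))

# Full-state feedback is recovered as u = -Kr @ V.T @ x
K_full = Kr @ V.T
print(K_full.shape)  # (2, 20)
```

The learning problem is solved in r = 4 dimensions rather than n = 20, which is the source of the speed-up; the cost of the sketch, as in the talk's method, is that the reduced controller is only near-optimal relative to the full LQR solution.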