no code implementations • 10 Mar 2025 • Zachary Ravichandran, Alexander Robey, Vijay Kumar, George J. Pappas, Hamed Hassani
Although the integration of large language models (LLMs) into robotics has unlocked transformative capabilities, it has also introduced significant safety concerns, ranging from average-case LLM errors (e.g., hallucinations) to adversarial jailbreaking attacks, which can produce harmful robot behavior in real-world settings.
no code implementations • 17 Feb 2025 • Tesshu Fujinami, Bruce D. Lee, Nikolai Matni, George J. Pappas
We study the sample efficiency of domain randomization and robust control for the benchmark problem of learning the linear quadratic regulator (LQR).
1 code implementation • 10 Nov 2024 • Hanwen Cao, George J. Pappas, Nikolay Atanasov
We view the unknown data association as a latent variable and apply Expectation Maximization (EM) to obtain a filter whose update step has the same form as the Kalman filter's, but with an expanded measurement vector containing all potential associations.
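A minimal numpy sketch of the soft-association idea (not the authors' filter, which stacks all candidates into one expanded measurement vector; the function name and weighting scheme here are illustrative assumptions): E-step association probabilities turn the candidate measurements into an expected measurement that feeds a standard Kalman update.

```python
import numpy as np

def soft_association_kalman_update(x, P, H, R, candidates, weights):
    """One Kalman-style measurement update with soft data association.
    candidates: list of candidate measurement vectors for this time step
    weights:    E-step association probabilities (sum to 1)"""
    z_bar = sum(w * z for w, z in zip(weights, candidates))  # expected measurement
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x + K @ (z_bar - H @ x)             # state update with soft innovation
    P_new = (np.eye(len(x)) - K @ H) @ P        # covariance update
    return x_new, P_new
```

With a single candidate of weight 1, this reduces to the ordinary Kalman update.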
1 code implementation • 3 Nov 2024 • Sima Noorani, Orlando Romero, Nicolo Dal Fabbro, Hamed Hassani, George J. Pappas
In this direction, conformal training (ConfTr) by Stutz et al. (2022) is a technique that seeks to minimize the expected prediction set size of a model by simulating CP between training updates.
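The quantity ConfTr penalizes can be sketched as follows: simulate split conformal prediction on a batch, then score the resulting set size with a smooth surrogate so it can be used in training. This is a simplified sketch of the idea, not the authors' implementation; the function name and the sigmoid relaxation are assumptions.

```python
import numpy as np

def soft_prediction_set_size(cal_scores, test_scores, alpha=0.1, temp=0.05):
    """cal_scores:  (n,)    nonconformity scores of the true labels (calibration half)
    test_scores: (m, K)  per-class nonconformity scores (prediction half)"""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(cal_scores, level)          # simulated CP threshold
    # Sigmoid relaxation of the indicator 1[score <= q]: a class enters the
    # prediction set when its nonconformity score falls below the threshold.
    soft_in_set = 1.0 / (1.0 + np.exp((test_scores - q) / temp))
    return soft_in_set.sum(axis=1).mean()       # expected prediction set size
```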
no code implementations • 17 Oct 2024 • Alexander Robey, Zachary Ravichandran, Vijay Kumar, Hamed Hassani, George J. Pappas
Unlike existing textual attacks on LLM chatbots, RoboPAIR elicits harmful physical actions from LLM-controlled robots, a phenomenon we experimentally demonstrate in three scenarios: (i) a white-box setting, wherein the attacker has full access to the NVIDIA Dolphins self-driving LLM; (ii) a gray-box setting, wherein the attacker has partial access to a Clearpath Robotics Jackal UGV robot equipped with a GPT-4o planner; and (iii) a black-box setting, wherein the attacker has only query access to the GPT-3.5-integrated Unitree Robotics Go2 robot dog.
no code implementations • 15 Oct 2024 • Thomas T. Zhang, Bruce D. Lee, Ingvar Ziemann, George J. Pappas, Nikolai Matni
We show that when $N \gtrsim C_{\mathrm{dep}} (\mathrm{dim}(\mathcal F) + \mathrm{C}(\mathcal G)/T)$, the excess risk of $\hat f^{(0)} \circ \hat g$ on the target task decays as $\nu_{\mathrm{div}} \big(\frac{\mathrm{dim}(\mathcal F)}{N'} + \frac{\mathrm{C}(\mathcal G)}{N T} \big)$, where $C_{\mathrm{dep}}$ denotes the effect of data dependency, $\nu_{\mathrm{div}}$ denotes an (estimable) measure of $\textit{task-diversity}$ between the source and target tasks, and $\mathrm C(\mathcal G)$ denotes the complexity of the representation class $\mathcal G$.
no code implementations • 13 Oct 2024 • Kong Yao Chee, Pei-An Hsieh, George J. Pappas, M. Ani Hsieh
Through simulations and physical experiments, we show that incorporating the model into a novel learning-based nonlinear model predictive control (MPC) framework results in substantial performance improvements in terms of trajectory tracking and disturbance rejection.
no code implementations • 3 Oct 2024 • Zachary Ravichandran, Varun Murali, Mariliza Tzes, George J. Pappas, Vijay Kumar
However, existing LLM-enabled planners typically do not consider online planning or complex missions; rather, relevant subtasks and semantics are provided by a pre-built map or a user.
no code implementations • 20 Sep 2024 • Ingvar Ziemann, Nikolai Matni, George J. Pappas
For this situation, we show that no learner using a linear filter can successfully learn the random walk unless the filter length exceeds a certain threshold depending on the effective memory length and horizon of the problem.
no code implementations • 31 Aug 2024 • Lars Lindemann, Yiqi Zhao, Xinyi Yu, George J. Pappas, Jyotirmoy V. Deshmukh
We focus on learning-enabled autonomous systems (LEASs) in which the complexity of learning-enabled components (LECs) is a major bottleneck that hampers the use of existing model-based verification and design techniques.
2 code implementations • 22 May 2024 • Sifan Wang, Jacob H Seidman, Shyam Sankaran, Hanwen Wang, George J. Pappas, Paris Perdikaris
Here we introduce the Continuous Vision Transformer (CViT), a novel neural operator architecture that leverages advances in computer vision to address challenges in learning complex physical systems.
no code implementations • 17 May 2024 • Charis Stamouli, Lars Lindemann, George J. Pappas
We propose a shrinking-horizon MPC that guarantees recursive feasibility via a gradual relaxation of the safety constraints as new prediction regions become available online.
no code implementations • 13 Apr 2024 • Bruce D. Lee, Ingvar Ziemann, George J. Pappas, Nikolai Matni
Model-based reinforcement learning is an effective approach for controlling an unknown system.
no code implementations • 11 Apr 2024 • Charis Stamouli, Ingvar Ziemann, George J. Pappas
We study the quadratic prediction error method -- i.e., nonlinear least squares -- for a class of time-varying parametric predictor models satisfying a certain identifiability condition.
3 code implementations • 28 Mar 2024 • Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramer, Hamed Hassani, Eric Wong
To address these challenges, we introduce JailbreakBench, an open-sourced benchmark with the following components: (1) an evolving repository of state-of-the-art adversarial prompts, which we refer to as jailbreak artifacts; (2) a jailbreaking dataset comprising 100 behaviors -- both original and sourced from prior work (Zou et al., 2023; Mazeika et al., 2023, 2024) -- which align with OpenAI's usage policies; (3) a standardized evaluation framework at https://github.com/JailbreakBench/jailbreakbench that includes a clearly defined threat model, system prompts, chat templates, and scoring functions; and (4) a leaderboard at https://jailbreakbench.github.io/ that tracks the performance of attacks and defenses for various LLMs.
no code implementations • 28 Mar 2024 • Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Nathaniel Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter
Prompt engineering is effective for controlling the output of text-to-image (T2I) generative models, but it is also laborious due to the need for manually crafted prompts.
no code implementations • 25 Mar 2024 • Nicolò Dal Fabbro, Arman Adibi, H. Vincent Poor, Sanjeev R. Kulkarni, Aritra Mitra, George J. Pappas
We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
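A toy illustration of this pattern, not the paper's algorithm (the function name, step sizes, and averaging schedule are all assumptions): each agent runs local SA steps toward the mean of a noisy signal, and the server periodically averages the iterates, which shrinks the variance roughly by a factor of $N$.

```python
import numpy as np

def parallel_sa_mean(n_agents=10, local_steps=50, rounds=20, step=0.1, mu=3.0, seed=0):
    """N agents run local SA steps x <- x + step*(sample - x) toward the
    mean mu of a noisy signal; a central server averages between rounds."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_agents)                        # each agent's local iterate
    for _ in range(rounds):
        for _ in range(local_steps):
            samples = mu + rng.standard_normal(n_agents)  # noisy observations
            x += step * (samples - x)                     # local SA update
        x[:] = x.mean()                           # server averages and broadcasts
    return float(x[0])
```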
1 code implementation • 25 Feb 2024 • Jiabao Ji, Bairu Hou, Alexander Robey, George J. Pappas, Hamed Hassani, Yang Zhang, Eric Wong, Shiyu Chang
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks, which bypass the safeguards of targeted LLMs and fool them into generating objectionable content.
no code implementations • 19 Feb 2024 • Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra
Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling.
no code implementations • 8 Feb 2024 • Ingvar Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni
In this work, we study statistical learning with dependent ($\beta$-mixing) data and square loss in a hypothesis class $\mathscr{F}\subset L_{\Psi_p}$ where $\Psi_p$ is the norm $\|f\|_{\Psi_p} \triangleq \sup_{m\geq 1} m^{-1/p} \|f\|_{L^m} $ for some $p\in [2,\infty]$.
1 code implementation • 26 Jan 2024 • Shuo Yang, Yu Chen, Xiang Yin, George J. Pappas, Rahul Mangharam
Our approach is computationally efficient, minimally invasive to any reference controller, and applicable to large-scale systems.
1 code implementation • 12 Dec 2023 • Renukanandan Tumu, Matthew Cleaveland, Rahul Mangharam, George J. Pappas, Lars Lindemann
While prior work has gone into creating score functions that produce multi-modal prediction regions, such regions are generally too complex for use in downstream planning and control problems.
no code implementations • 11 Dec 2023 • Thomas Waite, Alexander Robey, Hamed Hassani, George J. Pappas, Radoslav Ivanov
This paper addresses the problem of data-driven modeling and verification of perception-based autonomous systems.
1 code implementation • 12 Oct 2023 • Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong
PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention.
1 code implementation • 5 Oct 2023 • Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas
Despite efforts to align large language models (LLMs) with human intentions, widely-used LLMs such as GPT, Llama, and Claude are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content.
no code implementations • 28 Sep 2023 • Charis Stamouli, Evangelos Chatzipantazis, George J. Pappas
We empirically show that, even though they are too loose to be used as absolute estimates, our SRM bounds on the true prediction error are able to track its relative behavior across the different model classes of the hierarchy.
no code implementations • 7 Sep 2023 • Ingvar Ziemann, Anastasios Tsiamis, Bruce Lee, Yassir Jedra, Nikolai Matni, George J. Pappas
This tutorial serves as an introduction to recently developed non-asymptotic methods in the theory of -- mainly linear -- system identification.
1 code implementation • 16 Aug 2023 • Shaoru Chen, Kong Yao Chee, Nikolai Matni, M. Ani Hsieh, George J. Pappas
With the increase in data availability, it has been widely demonstrated that neural networks (NNs) can capture complex system dynamics precisely in a data-driven manner.
no code implementations • 19 Jun 2023 • Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher
One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data.
no code implementations • 8 Jun 2023 • Alëna Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
Many modern autonomous systems, particularly multi-agent systems, are time-critical and need to be robust against timing uncertainties.
no code implementations • 15 May 2023 • Thomas Beckers, Qirui Wu, George J. Pappas
Variational autoencoders make it possible to learn a lower-dimensional latent space from high-dimensional input/output data.
no code implementations • 15 May 2023 • Thomas Beckers, Tom Z. Jiahao, George J. Pappas
Switching physical systems are ubiquitous in modern control applications; examples include the locomotion of robots and animals, and power converters with switches and diodes.
no code implementations • 15 May 2023 • Thomas Beckers, Jacob Seidman, Paris Perdikaris, George J. Pappas
Data-driven approaches achieve remarkable results for the modeling of complex dynamics based on collected data.
no code implementations • 14 May 2023 • Nicolò Dal Fabbro, Aritra Mitra, George J. Pappas
Federated learning (FL) has recently gained much attention due to its effectiveness in speeding up supervised learning tasks under communication and privacy constraints.
1 code implementation • 3 Apr 2023 • Matthew Cleaveland, Insup Lee, George J. Pappas, Lars Lindemann
In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, previous works require that each individual prediction region be valid with confidence $1-\delta/T$.
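The union-bound baseline and one simple alternative can be sketched on calibration data; neither is the paper's method, and the function name and score convention are assumptions.

```python
import numpy as np

def region_radii(scores, delta=0.1):
    """scores: (n, T) per-time-step nonconformity scores on calibration data.
    Returns (i) per-step radii from the union-bound approach, which calibrates
    each step at level 1 - delta/T, and (ii) a single radius from the
    1 - delta quantile of the max-over-time score."""
    n, T = scores.shape
    union = np.quantile(scores, 1 - delta / T, axis=0)   # conservative, per step
    joint = np.quantile(scores.max(axis=1), 1 - delta)   # one radius for all steps
    return union, joint
```

Because the union-bound radii are computed at the higher level $1-\delta/T$, they dominate the plain per-step $1-\delta$ quantiles, which is exactly the conservatism the paper targets.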
no code implementations • 1 Apr 2023 • Shuo Yang, George J. Pappas, Rahul Mangharam, Lars Lindemann
However, these perception maps are not perfect and result in state estimation errors that can lead to unsafe system behavior.
no code implementations • 20 Feb 2023 • Jacob H. Seidman, Georgios Kissas, George J. Pappas, Paris Perdikaris
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
no code implementations • 4 Feb 2023 • Han Wang, Aritra Mitra, Hamed Hassani, George J. Pappas, James Anderson
We initiate the study of federated reinforcement learning under environmental heterogeneity by considering a policy evaluation problem.
no code implementations • 27 Jan 2023 • Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab
Neural networks are known to be vulnerable to adversarial attacks, which are small, imperceptible perturbations that can significantly alter the network's output.
no code implementations • 3 Jan 2023 • Aritra Mitra, George J. Pappas, Hamed Hassani
These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays.
no code implementations • 3 Nov 2022 • Lars Lindemann, Xin Qin, Jyotirmoy V. Deshmukh, George J. Pappas
The second algorithm constructs prediction regions for future system states first, and uses these to obtain a prediction region for the satisfaction measure.
no code implementations • 24 Sep 2022 • Mariliza Tzes, Nikolaos Bousias, Evangelos Chatzipantazis, George J. Pappas
This paper addresses the Multi-Robot Active Information Acquisition (AIA) problem, where a team of mobile robots, communicating through an underlying graph, estimates a hidden state expressing a phenomenon of interest.
no code implementations • 12 Sep 2022 • Anastasios Tsiamis, Ingvar Ziemann, Nikolai Matni, George J. Pappas
This tutorial survey provides an overview of recent non-asymptotic advances in statistical learning theory as relevant to control and system identification.
no code implementations • 12 Sep 2022 • Anastasia Impicciatore, Anastasios Tsiamis, Yuriy Zacchia Lun, Alessandro D'Innocenzo, George J. Pappas
This note studies state estimation in wireless networked control systems with secrecy against eavesdropping.
2 code implementations • 20 Jul 2022 • Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf
By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.
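The empirical objective is simple to state; a minimal sketch (the paper's estimator differs, e.g. by smoothing the quantile for gradient-based training, and the function name is an assumption):

```python
import numpy as np

def qrm_objective(per_domain_losses, alpha=0.9):
    """Empirical QRM objective: the alpha-quantile of a predictor's
    risk across training domains.
    per_domain_losses: list of per-sample loss arrays, one per domain."""
    risks = np.array([np.mean(l) for l in per_domain_losses])  # risk per domain
    return np.quantile(risks, alpha)   # value exceeded on at most (1-alpha) of domains
```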
no code implementations • 7 Jun 2022 • Jacob H. Seidman, Georgios Kissas, Paris Perdikaris, George J. Pappas
Supervised learning in function spaces is an emerging area of machine learning research with applications to the prediction of complex physical systems such as fluid flows, solid mechanics, and climate modeling.
no code implementations • 6 Jun 2022 • Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani
We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret.
no code implementations • 28 May 2022 • Lars Lindemann, Lejun Jiang, Nikolai Matni, George J. Pappas
For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly.
no code implementations • 27 May 2022 • Anastasios Tsiamis, Ingvar Ziemann, Manfred Morari, Nikolai Matni, George J. Pappas
In this paper, we study the statistical difficulty of learning to control linear systems.
no code implementations • 7 Apr 2022 • Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani
Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning.
no code implementations • 3 Apr 2022 • Charis Stamouli, Anastasios Tsiamis, Manfred Morari, George J. Pappas
Then, we employ this benchmark controller to derive a novel robustly stable adaptive SMPC scheme that learns the necessary noise statistics online, while guaranteeing time-uniform satisfaction of the unknown reformulated state constraints with high probability.
1 code implementation • 2 Apr 2022 • Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, Rajeev Alur
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
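For context, the simplest certified Lipschitz upper bound is the product of layer spectral norms, which is generally loose; tighter certificates (e.g. SDP-based ones) improve on it. The sketch below shows only this naive baseline, not the paper's method.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper bound on the Lipschitz constant of a feed-forward network with
    1-Lipschitz activations: the product of the layers' spectral norms."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```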
no code implementations • 29 Mar 2022 • Alëna Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
We study the temporal robustness of temporal logic specifications and show how to design temporally robust control laws for time-critical control systems.
1 code implementation • ICLR 2022 • Allan Zhou, Fahim Tajwar, Alexander Robey, Tom Knowles, George J. Pappas, Hamed Hassani, Chelsea Finn
Based on this analysis, we show how a generative approach for learning the nuisance transformations can help transfer invariances across classes and improve performance on a set of imbalanced image classification benchmarks.
Ranked #23 on Long-tail Learning on CIFAR-10-LT (ρ=100)
no code implementations • 2 Mar 2022 • Aritra Mitra, Hamed Hassani, George J. Pappas
Specifically, in our setup, an agent interacting with an environment transmits encoded estimates of an unknown model parameter to a server over a communication channel of finite capacity.
1 code implementation • 5 Feb 2022 • Lars Lindemann, Alena Rodionova, George J. Pappas
We then define the temporal robustness risk by investigating the temporal robustness of the realizations of a stochastic signal.
1 code implementation • 2 Feb 2022 • Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani
From a theoretical point of view, this framework overcomes the trade-offs between the performance and the sample-complexity of worst-case and average-case learning.
1 code implementation • 4 Jan 2022 • Georgios Kissas, Jacob Seidman, Leonardo Ferreira Guilhoto, Victor M. Preciado, George J. Pappas, Paris Perdikaris
Supervised operator learning is an emerging machine learning paradigm with applications to modeling the evolution of spatio-temporal dynamical systems and approximating general black-box relationships between functional data.
no code implementations • 14 Dec 2021 • Manuela Gamonal, Thomas Beckers, George J. Pappas, Leonardo J. Colombo
We provide a decentralized control law that exponentially stabilizes the motion of the agents and captures Reynolds boids motion for swarms by using GPs as an online learning-based oracle for the prediction of the unknown dynamics.
no code implementations • NeurIPS 2021 • Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro
In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions, which we characterize completely.
no code implementations • 2 Oct 2021 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis.
no code implementations • 30 Aug 2021 • Lars Lindemann, George J. Pappas, Dimos V. Dimarogonas
Addressing these is pivotal to build fully autonomous systems and requires a systematic integration of planning and control.
1 code implementation • NeurIPS 2021 • Wanxin Jin, Shaoshuai Mou, George J. Pappas
We propose a Safe Pontryagin Differentiable Programming (Safe PDP) methodology, which establishes a theoretical and algorithmic framework to solve a broad class of safety-critical learning and control tasks -- problems that require the guarantee of safety constraint satisfaction at any stage of the learning and control process.
1 code implementation • 6 Apr 2021 • Alena Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
We present a robust control framework for time-critical systems in which satisfying real-time constraints robustly is of utmost importance for the safety of the system.
no code implementations • 3 Apr 2021 • Lars Lindemann, Nikolai Matni, George J. Pappas
We then define the risk of a stochastic process not satisfying an STL formula robustly, referred to as the STL robustness risk.
no code implementations • 2 Apr 2021 • Anastasios Tsiamis, George J. Pappas
Statistically easy-to-learn linear system classes have sample complexity that is polynomial in the system dimension.
1 code implementation • NeurIPS 2021 • Alexander Robey, George J. Pappas, Hamed Hassani
Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data.
no code implementations • NeurIPS 2021 • Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani
We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model.
no code implementations • 22 Dec 2020 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps.
1 code implementation • 22 Dec 2020 • Erfan Nozari, Maxwell A. Bertolero, Jennifer Stiso, Lorenzo Caciagli, Eli J. Cornblath, Xiaosong He, Arun S. Mahadevan, George J. Pappas, Dani Smith Bassett
Contrary to our expectations, linear auto-regressive models achieve the best measures across all three metrics, eliminating the trade-off between accuracy and simplicity.
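The winning model family is simple enough to sketch: a linear auto-regressive model fit by ordinary least squares. This is a generic order-$p$ sketch, not the authors' code; the function name and lag convention are assumptions.

```python
import numpy as np

def fit_linear_ar(X, p=1):
    """Fit a linear AR(p) model x_t = [x_{t-1}, ..., x_{t-p}] A by least squares.
    X: (T, d) array of d-dimensional observations over T time steps.
    Returns A of shape (p*d, d)."""
    T, d = X.shape
    Y = X[p:]                                              # targets x_t
    Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])  # lagged regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return A
```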
1 code implementation • 16 Nov 2020 • Christopher D. Hsu, Heejin Jeong, George J. Pappas, Pratik Chaudhari
Our method can handle an arbitrary number of pursuers and targets; we show results for tasks consisting of up to 1000 pursuers tracking 1000 targets.
no code implementations • 12 Oct 2020 • Pushpak Jagtap, George J. Pappas, Majid Zamani
This paper focuses on the controller synthesis for unknown, nonlinear systems while ensuring safety constraints.
no code implementations • 14 Sep 2020 • Thomas Beckers, Leonardo Colombo, Sandra Hirche, George J. Pappas
To overcome this issue, we present a tracking control law for underactuated rigid-body dynamics using an online learning-based oracle for the prediction of the unknown dynamics.
1 code implementation • 17 Jun 2020 • Heejin Jeong, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas
In particular, we introduce Active Tracking Target Network (ATTN), a unified RL policy that is capable of solving major sub-tasks of active target tracking -- in-sight tracking, navigation, and exploration.
no code implementations • 12 Jun 2020 • Harshat Kumar, Dionysios S. Kalogerias, George J. Pappas, Alejandro Ribeiro
Deterministic Policy Gradient (DPG) removes a level of randomness from standard randomized-action Policy Gradient (PG), and demonstrates substantial empirical success for tackling complex dynamic problems involving Markov decision processes.
1 code implementation • 20 May 2020 • Alexander Robey, Hamed Hassani, George J. Pappas
Indeed, natural variation such as changes in lighting or weather conditions can significantly degrade the accuracy of trained neural networks, demonstrating that such natural variation presents a significant challenge for deep learning.
no code implementations • 3 May 2020 • Jason Z. Kim, Zhixin Lu, Erfan Nozari, George J. Pappas, Danielle S. Bassett
Here we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory.
no code implementations • L4DC 2020 • Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas
By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly.
1 code implementation • 16 Apr 2020 • Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas
There has been an increasing interest in using neural networks in closed-loop control systems to improve performance and reduce computational costs for on-line implementation.
no code implementations • L4DC 2020 • Anastasios Tsiamis, Nikolai Matni, George J. Pappas
We show that when the system identification step produces sufficiently accurate estimates, or when the underlying true KF is sufficiently robust, a Certainty Equivalent (CE) KF, i.e., one designed using the estimated parameters directly, enjoys provable sub-optimality guarantees.
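The CE design itself is the textbook construction: run the Riccati recursion with the estimated parameters as if they were the true ones. A minimal sketch (the function name and fixed iteration count are assumptions, not the paper's code):

```python
import numpy as np

def ce_kalman_gain(A_hat, C_hat, Q, R, iters=500):
    """Certainty-equivalent Kalman filter: iterate the Riccati recursion with
    the estimated (A_hat, C_hat) and return the steady-state filter gain."""
    n = A_hat.shape[0]
    P = np.eye(n)                               # state-error covariance iterate
    for _ in range(iters):
        S = C_hat @ P @ C_hat.T + R             # innovation covariance
        K = P @ C_hat.T @ np.linalg.inv(S)      # filter gain
        P = A_hat @ (P - K @ C_hat @ P) @ A_hat.T + Q   # Riccati update
    return K
```

For the scalar case $a=0.5$, $c=q=r=1$, the fixed point gives $P \approx 1.133$ and gain $K = P/(P+1) \approx 0.531$.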
no code implementations • 6 Dec 2019 • Dionysios S. Kalogerias, Luiz. F. O. Chamon, George J. Pappas, Alejandro Ribeiro
Despite the simplicity and intuitive interpretation of Minimum Mean Squared Error (MMSE) estimators, their effectiveness in certain scenarios is questionable.
no code implementations • 10 Nov 2019 • Dionysios S. Kalogerias, Mark Eisen, George J. Pappas, Alejandro Ribeiro
Upon further assuming the use of near-universal policy parameterizations, we also develop explicit bounds on the gap between optimal values of initial, infinite dimensional resource allocation problems, and dual values of their parameterized smoothed surrogates.
no code implementations • 8 Nov 2019 • Konstantinos Gatsis, George J. Pappas
In this regard our work is the first to characterize the amount of channel modeling that is required to answer such a question.
2 code implementations • 23 Oct 2019 • Heejin Jeong, Brent Schlotfeldt, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas
In this paper, we propose a novel Reinforcement Learning approach for solving the Active Information Acquisition problem, which requires an agent to choose a sequence of actions in order to acquire information about a process of interest using on-board sensors.
1 code implementation • 9 Oct 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which, given a confidence ellipsoid for the input of the neural network, the goal is to compute a confidence ellipsoid for the output.
no code implementations • 2 Oct 2019 • Lifeng Zhou, Vasileios Tzoumas, George J. Pappas, Pratap Tokekar
Since DRM overestimates the number of attacks in each clique, we also introduce an Improved Distributed Robust Maximization (IDRM) algorithm.
no code implementations • 30 Sep 2019 • Alexander Robey, Arman Adibi, Brent Schlotfeldt, George J. Pappas, Hamed Hassani
Given this distributed setting, we develop Constraint-Distributed Continuous Greedy (CDCG), a message passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication.
1 code implementation • 11 Sep 2019 • Mohammadhosein Hasanbeig, Yiannis Kantaros, Alessandro Abate, Daniel Kroening, George J. Pappas, Insup Lee
Reinforcement Learning (RL) has emerged as an efficient method of choice for solving complex sequential decision making problems in automatic control, computer science, economics, and biology.
1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas
The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
no code implementations • 21 Mar 2019 • Anastasios Tsiamis, George J. Pappas
In this paper, we analyze the finite sample complexity of stochastic system identification using modern tools from machine learning and statistics.
4 code implementations • 4 Mar 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
Certifying the safety or robustness of neural networks against input uncertainties and adversarial attacks is an emerging challenge in the area of safe machine learning and control.
1 code implementation • 12 Feb 2019 • Cassiano O. Becker, Sérgio Pequito, George J. Pappas, Victor M. Preciado
In this setting, we first consider a feasibility problem consisting of tuning the edge weights such that certain controllability properties are satisfied.
1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.
1 code implementation • 7 Sep 2018 • Andreea B. Alexandru, Konstantinos Gatsis, Yasser Shoukry, Sanjit A. Seshia, Paulo Tabuada, George J. Pappas
The development of large-scale distributed control systems has led to the outsourcing of costly computations to cloud-computing platforms, as well as to concerns about privacy of the collected sensitive data.
no code implementations • 2 Apr 2018 • Vasileios Tzoumas, Ali Jadbabaie, George J. Pappas
The objective of this paper is to focus on resilient matroid-constrained problems arising in control and sensing but in the presence of sensor and actuator failures.
1 code implementation • 27 Mar 2018 • Andreea B. Alexandru, Manfred Morari, George J. Pappas
We propose protocols for two cloud-MPC architectures motivated by the current developments in the Internet of Things: a client-server architecture and a two-server architecture.
no code implementations • 26 Mar 2018 • Brent Schlotfeldt, Vasileios Tzoumas, Dinesh Thakur, George J. Pappas
In this paper, we provide the first algorithm, enabling the following capabilities: minimal communication, i.e., the algorithm is executed by the robots based only on minimal communication between them; system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks and failures; and provable approximation performance, i.e., the algorithm ensures for all monotone (and not necessarily submodular) objective functions a solution that is finitely close to the optimal.
no code implementations • 21 Mar 2018 • Vasileios Tzoumas, Ali Jadbabaie, George J. Pappas
In this paper, we provide the first scalable algorithm that achieves the following characteristics: system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks, deletions, or failures; adaptiveness, i.e., at each time step, the algorithm selects system elements based on the history of inflicted attacks, deletions, or failures; and provable approximation performance, i.e., the algorithm guarantees for monotone objective functions a solution close to the optimal.
no code implementations • 23 Jan 2018 • Ke Sun, Kelsey Saulnier, Nikolay Atanasov, George J. Pappas, Vijay Kumar
Many accurate and efficient methods exist that address this problem but most assume that the occupancy states of different elements in the map representation are statistically independent.
1 code implementation • 9 Dec 2017 • Heejin Jeong, Clark Zhang, George J. Pappas, Daniel D. Lee
We formulate an efficient closed-form solution for the value update by approximately estimating analytic parameters of the posterior of the Q-beliefs.
no code implementations • 1 Apr 2014 • Menglong Zhu, Nikolay Atanasov, George J. Pappas, Kostas Daniilidis
This paper presents an active approach for part-based object detection, which optimizes the order of part filter evaluations and the time at which to stop and make a prediction.
no code implementations • 20 Sep 2013 • Nikolay Atanasov, Bharath Sankaran, Jerome Le Ny, George J. Pappas, Kostas Daniilidis
One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose.