no code implementations • 13 Mar 2024 • Ting-Jui Chang, Shahin Shahrampour
In this work, we study online linear quadratic Gaussian problems with a given linear constraint imposed on the controller.
no code implementations • 4 Oct 2023 • Ting-Jui Chang, Shahin Shahrampour
For the unknown dynamics case, we design a distributed explore-then-commit approach, where in the exploration phase all agents jointly learn the system dynamics, and in the commit phase our proposed control algorithm is applied using each agent's system estimate.
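The exploration step described above amounts to system identification from excited trajectories. A minimal single-agent sketch, assuming a least-squares estimator and an illustrative stable system (the matrices, noise level, and horizon below are made up for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 2, 1, 500  # state dim, input dim, exploration horizon

# Hypothetical stable LTI system x_{t+1} = A x_t + B u_t + w_t (illustrative only).
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])

# Exploration phase: excite the system with random inputs and record the trajectory.
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    w = 0.01 * rng.normal(size=n)  # process noise
    X[t + 1] = A @ X[t] + B @ U[t] + w

# Least-squares estimate of [A B] from the collected data.
Z = np.hstack([X[:-1], U])                      # regressors (x_t, u_t)
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
```

In the distributed setting, agents would additionally exchange or average such estimates over the network before committing to a controller.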
no code implementations • 23 Feb 2023 • Ting-Jui Chang, Sapana Chaudhary, Dileep Kalathil, Shahin Shahrampour
We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{1/3}C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence.
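The algorithm family behind such guarantees is projected online gradient descent, where each iterate is projected back into a safe set. A minimal sketch, assuming quadratic per-round losses and a single linear safety constraint `a @ x <= b` (both illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
T, d = 200, 3
a, b = np.ones(d), 1.0  # hypothetical safe set {x : a @ x <= b}

def project(x):
    # Euclidean projection onto the half-space a @ x <= b.
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

x = np.zeros(d)
total_loss = 0.0
for t in range(T):
    theta_t = rng.normal(size=d)               # round-t loss f_t(x) = 0.5||x - theta_t||^2
    total_loss += 0.5 * np.sum((x - theta_t) ** 2)
    grad = x - theta_t
    x = project(x - grad / np.sqrt(t + 1))     # step size ~ 1/sqrt(t)
```

Every iterate satisfies the constraint by construction; the dynamic regret then scales with how far the per-round minimizers `theta_t` drift, i.e. the path-length term.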
no code implementations • 3 Jul 2022 • Ting-Jui Chang, Shahin Shahrampour
Inspired by this work, we study distributed online system identification of LTI systems over a multi-agent network.
no code implementations • 15 May 2021 • Ting-Jui Chang, Shahin Shahrampour
Consider a multi-agent network where each agent is modeled as an LTI system.
no code implementations • 29 Sep 2020 • Ting-Jui Chang, Shahin Shahrampour
Recent advances in online optimization and control have provided novel tools to study LQ problems that are robust to time-varying cost parameters.
no code implementations • 6 Jun 2020 • Ting-Jui Chang, Shahin Shahrampour
The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence ($V_T$) and/or the path-length of the minimizer sequence after $T$ rounds.
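To make the two comparator quantities concrete, here is a small sketch computing the path-length $C_T = \sum_t \|\theta_t - \theta_{t-1}\|$ for a drifting minimizer sequence (the drifting quadratic losses below are illustrative; the function variation $V_T$ would analogously sum per-round changes of the losses themselves):

```python
import numpy as np

# Hypothetical drifting quadratic losses f_t(x) = ||x - theta_t||^2,
# whose round-t minimizer is theta_t itself.
rng = np.random.default_rng(1)
T = 100
theta = np.cumsum(0.05 * rng.normal(size=(T, 2)), axis=0)  # slowly drifting minimizers

# Path-length of the minimizer sequence: C_T = sum_t ||theta_t - theta_{t-1}||.
C_T = sum(np.linalg.norm(theta[t] - theta[t - 1]) for t in range(1, T))
```

Regret bounds of the form above say the learner's cumulative loss exceeds the drifting comparator's by an amount controlled by such variation measures.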
no code implementations • 12 Feb 2020 • Ting-Jui Chang, Shahin Shahrampour
Large-scale finite-sum problems can be solved using efficient variants of Newton's method, where the Hessian is approximated via sub-samples of the data.
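A minimal sketch of this idea on regularized logistic regression, assuming uniform sub-sampling of the Hessian and an exact gradient (the dataset, sample size, and iteration count are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, s = 5000, 5, 500  # dataset size, dimension, Hessian sub-sample size

# Synthetic finite-sum problem: l2-regularized logistic regression (illustrative).
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = (rng.random(N) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
lam = 1e-3

def full_gradient(w):
    p = 1 / (1 + np.exp(-X @ w))
    return X.T @ (p - y) / N + lam * w

def subsampled_hessian(w):
    # Hessian approximated on a uniform sub-sample of s data points.
    idx = rng.choice(N, size=s, replace=False)
    Xs = X[idx]
    p = 1 / (1 + np.exp(-Xs @ w))
    D = p * (1 - p)
    return Xs.T @ (Xs * D[:, None]) / s + lam * np.eye(d)

w = np.zeros(d)
for _ in range(20):  # sub-sampled Newton iterations
    w -= np.linalg.solve(subsampled_hessian(w), full_gradient(w))
```

Each iteration costs O(s d^2) for the Hessian instead of O(N d^2), while the exact gradient keeps the fixed point unchanged.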
no code implementations • ICLR 2019 • Ting-Jui Chang, Yukun He, Peng Li
However, adversarial training with PGD and other multi-step adversarial examples is much more computationally expensive than adversarial training with simpler, single-step attacks.
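The cost gap comes from the number of gradient evaluations per example. A minimal numpy sketch contrasting single-step FGSM with multi-step PGD on a linear model (the model, loss gradient, and attack parameters are all illustrative; for a linear score the two attacks reach the same point, but PGD pays roughly `steps` times the per-example cost):

```python
import numpy as np

rng = np.random.default_rng(3)
d, eps, alpha, steps = 10, 0.3, 0.05, 10

w = rng.normal(size=d)   # linear classifier score: w @ x
x = rng.normal(size=d)   # clean input with true label +1
loss_grad = -w           # gradient of the loss w.r.t. x for label +1 (illustrative)

def fgsm(x):
    # One gradient-sign step: the cost of a single backward pass.
    return x + eps * np.sign(loss_grad)

def pgd(x):
    # `steps` gradient-sign steps, each followed by projection onto the
    # L_inf ball of radius eps: roughly `steps` backward passes per example.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad)
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv

x_fgsm, x_pgd = fgsm(x), pgd(x)
```

During adversarial training this inner loop runs for every minibatch, which is why multi-step methods multiply the total training cost.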