Search Results for author: Ting-Jui Chang

Found 7 papers, 0 papers with code

Dynamic Regret Analysis of Safe Distributed Online Optimization for Convex and Non-convex Problems

no code implementations • 23 Feb 2023 • Ting-Jui Chang, Sapana Chaudhary, Dileep Kalathil, Shahin Shahrampour

We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{1/3}C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence.

Distributed Online System Identification for LTI Systems Using Reverse Experience Replay

no code implementations • 3 Jul 2022 • Ting-Jui Chang, Shahin Shahrampour

Inspired by this work, we study distributed online system identification of LTI systems over a multi-agent network.
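As background for what "system identification of LTI systems" means here, the following is a minimal illustrative sketch of the classical centralized approach, estimating the transition matrix of $x_{t+1} = A x_t + w_t$ by least squares from one trajectory. It is not the paper's distributed, reverse-experience-replay method; the system matrix, noise level, and trajectory length below are arbitrary choices for illustration.

```python
import numpy as np

# Simulate a stable LTI system x_{t+1} = A x_t + w_t (values are illustrative).
rng = np.random.default_rng(1)
d, T = 3, 2000
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.0, 0.0, 0.7]])

X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

# Regress x_{t+1} on x_t: A_hat = argmin_A sum_t ||x_{t+1} - A x_t||^2.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T  # lstsq returns A^T, since rows are x_t^T
```

The distributed setting studied in the paper has each agent observe its own trajectory and cooperate over a network; the sketch above is only the single-agent baseline that such methods build on.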

Regret Analysis of Distributed Online LQR Control for Unknown LTI Systems

no code implementations • 15 May 2021 • Ting-Jui Chang, Shahin Shahrampour

Consider a multi-agent network where each agent is modeled as an LTI system.

Distributed Online Linear Quadratic Control for Linear Time-invariant Systems

no code implementations • 29 Sep 2020 • Ting-Jui Chang, Shahin Shahrampour

Recent advances in online optimization and control have provided novel tools to study LQ problems that are robust to time-varying cost parameters.
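For context, the classical centralized LQ baseline with a known system is solved by iterating the discrete-time Riccati recursion to a fixed point; the paper's contribution concerns the harder distributed, time-varying-cost setting. The sketch below uses an arbitrary double-integrator system chosen purely for illustration.

```python
import numpy as np

# Illustrative system: discretized double integrator (not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)  # state cost
R = np.eye(1)  # control cost

# Iterate the Riccati map P <- Q + A'P(A - BK) to (approximate) convergence.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# u_t = -K x_t is the optimal stationary state feedback for fixed (Q, R).
```

When the cost matrices vary over time and the system is shared across agents, this fixed-point computation is no longer available in closed form, which is what motivates the online, regret-based analysis in these papers.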

Unconstrained Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems

no code implementations • 6 Jun 2020 • Ting-Jui Chang, Shahin Shahrampour

The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence ($V_T$) and/or the path-length of the minimizer sequence after $T$ rounds.
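For reference, these two quantities are conventionally defined as follows (standard definitions from the dynamic regret literature, not quoted from the paper):

$$V_T = \sum_{t=2}^{T} \sup_{x} \left| f_t(x) - f_{t-1}(x) \right|, \qquad C_T = \sum_{t=2}^{T} \left\| x_t^* - x_{t-1}^* \right\|, \quad x_t^* \in \arg\min_x f_t(x),$$

where $f_t$ is the loss revealed at round $t$. The $C_T^*$ appearing in the first entry above is the path-length evaluated along the best (comparator) minimizer sequence.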

RFN: A Random-Feature Based Newton Method for Empirical Risk Minimization in Reproducing Kernel Hilbert Spaces

no code implementations • 12 Feb 2020 • Ting-Jui Chang, Shahin Shahrampour

Large-scale finite-sum problems can be solved using efficient variants of Newton's method, where the Hessian is approximated via sub-samples of the data.
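The general sub-sampled Newton idea referenced here can be sketched in a few lines: form the exact gradient but estimate the Hessian from a random sub-sample of rows. This is a generic illustration of that idea for ridge-regularized least squares, not the paper's random-feature (RFN) method; problem sizes and the regularizer are arbitrary.

```python
import numpy as np

# Synthetic regularized least-squares problem (sizes are illustrative).
rng = np.random.default_rng(0)
n, d, m = 1000, 5, 100  # data size, dimension, Hessian sub-sample size

X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 1e-2
w = np.zeros(d)

# Exact gradient of f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2.
grad = X.T @ (X @ w - y) / n + lam * w

# Hessian estimated from m randomly chosen rows instead of all n.
idx = rng.choice(n, size=m, replace=False)
Xs = X[idx]
H = Xs.T @ Xs / m + lam * np.eye(d)

# One sub-sampled Newton step.
w = w - np.linalg.solve(H, grad)
```

Sub-sampling reduces the Hessian cost from $O(nd^2)$ to $O(md^2)$ per iteration; the RFN paper instead exploits random features to make this tractable in reproducing kernel Hilbert spaces.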

Efficient Two-Step Adversarial Defense for Deep Neural Networks

no code implementations • ICLR 2019 • Ting-Jui Chang, Yukun He, Peng Li

However, adversarial training with PGD and other multi-step adversarial examples is far more computationally expensive than adversarial training with simpler attack techniques.

Adversarial Defense
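The cost gap mentioned above comes from gradient evaluations: one-step FGSM needs a single gradient per example, while $K$-step PGD needs $K$. A minimal sketch on a toy differentiable loss (standing in for a network; all functions and constants below are illustrative, not from the paper):

```python
import numpy as np

def loss_grad(x):
    # Gradient of the toy loss f(x) = 0.5 * ||x||^2; a real attack would
    # use the network's loss gradient w.r.t. the input here.
    return x

def fgsm(x, eps):
    # One-step attack: a single gradient evaluation.
    return x + eps * np.sign(loss_grad(x))

def pgd(x, eps, alpha, steps):
    # Multi-step attack: one gradient evaluation per step, projected back
    # onto the L-infinity ball of radius eps around the clean input.
    x0, xa = x, x.copy()
    for _ in range(steps):
        xa = xa + alpha * np.sign(loss_grad(xa))
        xa = np.clip(xa, x0 - eps, x0 + eps)
    return xa

x = np.array([0.5, -0.2])
x_fgsm = fgsm(x, eps=0.1)
x_pgd = pgd(x, eps=0.1, alpha=0.03, steps=10)
```

With 10 PGD steps, generating each adversarial example costs roughly 10 times as many backward passes as FGSM, which is the overhead the two-step defense in this paper aims to reduce.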
