Search Results for author: Chenxi Yang

Found 7 papers, 2 papers with code

Deep Policy Optimization with Temporal Logic Constraints

no code implementations · 17 Apr 2024 · Ameesh Shah, Cameron Voloshin, Chenxi Yang, Abhinav Verma, Swarat Chaudhuri, Sanjit A. Seshia

In our work, we consider the setting where the task is specified by an LTL objective and there is an additional scalar reward that we need to optimize.

Reinforcement Learning (RL)

Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization

1 code implementation · 18 Mar 2024 · Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang

To be specific, we present theoretical evidence showing that the magnitude of score changes is related to the $\ell_1$ norm of the model's gradient with respect to the input image.

Adversarial Robustness · No-Reference Image Quality Assessment +1
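The regularizer described above ties score stability to the $\ell_1$ norm of the model's input gradient. A minimal sketch of such a penalty term follows; the function name `gradient_norm_penalty` and the training setup are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def gradient_norm_penalty(model, images):
    """Hypothetical sketch: L1 norm of the quality score's gradient
    with respect to the input image, averaged over the batch.
    Adding this term to the training loss discourages large score
    changes under small input perturbations."""
    images = images.clone().requires_grad_(True)
    scores = model(images)  # predicted quality scores, shape (B, 1)
    # Gradient of the summed scores w.r.t. the input pixels.
    grads, = torch.autograd.grad(scores.sum(), images, create_graph=True)
    # Per-sample L1 norm, then mean over the batch.
    return grads.abs().sum(dim=(1, 2, 3)).mean()
```

In use, the penalty would be scaled by a weight and added to the usual quality-regression loss; `create_graph=True` keeps the gradient differentiable so the penalty itself can be backpropagated.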

Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method

no code implementations · 10 Jan 2024 · Chenxi Yang, Yujia Liu, Dingquan Li, Tingting Jiang

Ensuring the robustness of NR-IQA methods is vital for reliable comparisons of different image processing techniques and consistent user experiences in recommendations.

No-Reference Image Quality Assessment · NR-IQA

Adaptive Scheduling for Edge-Assisted DNN Serving

no code implementations · 19 Apr 2023 · Jian He, Chenxi Yang, Zhaoyuan He, Ghufran Baig, Lili Qiu

Based on this observation, we first design a novel scheduling algorithm to exploit the batching benefits of all requests that run the same DNN.

Scheduling
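The core idea above, batching all pending requests that target the same DNN, can be sketched in a few lines. This is a simplified illustration, not the paper's scheduler; the request format and `batch_by_model` helper are assumptions.

```python
from collections import defaultdict

def batch_by_model(requests):
    """Hypothetical sketch: group pending inference requests by the
    DNN they target, so each model runs once per batch instead of
    once per request."""
    batches = defaultdict(list)
    for req in requests:
        batches[req["model"]].append(req)
    return dict(batches)

queue = [
    {"id": 1, "model": "resnet"},
    {"id": 2, "model": "yolo"},
    {"id": 3, "model": "resnet"},
]
# requests 1 and 3 end up in one "resnet" batch
```

A real edge-serving scheduler would additionally weigh batch size against per-request deadlines, which is where the adaptive part of the algorithm comes in.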

Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation

no code implementations · 26 Jan 2023 · Chenxi Yang, Greg Anderson, Swarat Chaudhuri

In each learning iteration, it uses the current version of this model and an external abstract interpreter to construct a differentiable signal for provable robustness.

Adversarial Robustness · reinforcement-learning +1
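A common building block for the abstract interpreter mentioned above is interval (box) propagation, which pushes a set of perturbed inputs through a layer to get sound output bounds. The sketch below shows this for a single linear layer; it is a generic interval-arithmetic illustration under assumed names, not the paper's interpreter.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Hypothetical sketch of abstract interpretation with intervals:
    soundly propagate an input box [lo, hi] through y = W x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    # |W| @ radius bounds how far y can move from the center.
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius
```

Because every step is differentiable (center and radius are just affine in the inputs), bounds like these can serve as the "differentiable signal for provable robustness" that the learning loop optimizes.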

Safe Neurosymbolic Learning with Differentiable Symbolic Execution

2 code implementations · NeurIPS Workshop AIPLANS 2021 · Chenxi Yang, Swarat Chaudhuri

We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic, human-written code.
