no code implementations • 17 Apr 2024 • Ameesh Shah, Cameron Voloshin, Chenxi Yang, Abhinav Verma, Swarat Chaudhuri, Sanjit A. Seshia
In our work, we consider the setting where the task is specified by an LTL objective and there is an additional scalar reward that we need to optimize.
1 code implementation • 18 Mar 2024 • Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang
Specifically, we present theoretical evidence showing that the magnitude of score changes is related to the $\ell_1$ norm of the model's gradient with respect to the input image.
Tasks: Adversarial Robustness, No-Reference Image Quality Assessment, +1
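The relation between score changes and the $\ell_1$ gradient norm can be illustrated with a first-order sketch: for a perturbation bounded in $\ell_\infty$ norm by $\epsilon$, the worst-case change of a differentiable score is (to first order) $\epsilon \cdot \|\nabla f(x)\|_1$. The toy linear score below is an assumption for illustration, not the paper's model; for a linear score the bound is exact.

```python
import numpy as np

# Hypothetical linear "quality score" f(x) = w . x; its gradient w.r.t. the
# input is exactly w, so the first-order bound is tight in this toy case.
rng = np.random.default_rng(0)
w = rng.normal(size=16)           # stand-in for d(score)/d(input)
x = rng.normal(size=16)           # stand-in for a flattened input image

def score(x):
    return float(w @ x)

eps = 0.01                        # L-infinity budget of the perturbation
grad_l1 = float(np.abs(w).sum())  # ||grad||_1 controls the worst-case change

# Worst-case perturbation under ||delta||_inf <= eps is eps * sign(grad).
delta = eps * np.sign(w)
change = abs(score(x + delta) - score(x))

assert abs(change - eps * grad_l1) < 1e-9
```

This is why bounding the input-gradient's $\ell_1$ norm directly limits how much an attacker can move the predicted quality score.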
no code implementations • 10 Jan 2024 • Chenxi Yang, Yujia Liu, Dingquan Li, Tingting Jiang
Ensuring the robustness of NR-IQA methods is vital for reliable comparisons of different image processing techniques and consistent user experiences in recommendations.
no code implementations • 13 Dec 2023 • Divyanshu Saxena, Nihal Sharma, Donghyun Kim, Rohit Dwivedula, Jiayi Chen, Chenxi Yang, Sriram Ravula, Zichao Hu, Aditya Akella, Sebastian Angel, Joydeep Biswas, Swarat Chaudhuri, Isil Dillig, Alex Dimakis, P. Brighten Godfrey, Daehyeok Kim, Chris Rossbach, Gang Wang
This paper lays down the research agenda for a domain-specific foundation model for operating systems (OSes).
no code implementations • 19 Apr 2023 • Jian He, Chenxi Yang, Zhaoyuan He, Ghufran Baig, Lili Qiu
Based on this observation, we first design a novel scheduling algorithm to exploit the batching benefits of all requests that run the same DNN.
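The core batching idea can be sketched as grouping queued inference requests by the DNN they target, so each group is served by one batched forward pass. This is a minimal illustration under assumed request and model names, not the paper's actual scheduler.

```python
from collections import defaultdict

# Toy scheduler: group pending requests by model so requests that run the
# same DNN share a batched forward pass (names here are hypothetical).
def group_by_model(requests, max_batch=8):
    queues = defaultdict(list)
    for req in requests:
        queues[req["model"]].append(req)
    batches = []
    for model, reqs in queues.items():
        # Split each per-model queue into batches of at most max_batch.
        for i in range(0, len(reqs), max_batch):
            batches.append((model, reqs[i:i + max_batch]))
    return batches

reqs = [{"model": "resnet", "id": i} for i in range(5)] + \
       [{"model": "bert", "id": i} for i in range(3)]
batches = group_by_model(reqs, max_batch=4)
# resnet's 5 requests split into batches of 4 and 1; bert's 3 fit in one batch
assert sorted(len(b) for _, b in batches) == [1, 3, 4]
```

A real scheduler would also weigh per-request deadlines against the latency gain of waiting for a larger batch, which is the trade-off the paper's algorithm targets.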
no code implementations • 26 Jan 2023 • Chenxi Yang, Greg Anderson, Swarat Chaudhuri
In each learning iteration, it uses the current version of this model and an external abstract interpreter to construct a differentiable signal for provable robustness.
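One standard way an abstract interpreter yields such a signal is interval arithmetic: propagating input bounds through the network produces certified output bounds whose violation can be penalized during training. The tiny affine+ReLU example below is an illustrative sketch of interval propagation, with made-up weights, not the paper's specific abstract domain.

```python
import numpy as np

# Minimal interval abstract interpreter for an affine layer followed by ReLU.
# The certified output bounds [lo, hi] are piecewise-differentiable in the
# weights, so they can drive a provable-robustness training signal.
def affine_interval(lo, hi, W, b):
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    # Positive weights propagate like endpoints; negative weights swap them.
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
x = np.array([1.0, 1.0])
eps = 0.1  # input box: each coordinate within +/- eps of x

lo, hi = affine_interval(x - eps, x + eps, W, b)
lo, hi = relu_interval(lo, hi)
assert np.all(lo <= hi)  # sound over-approximation of reachable outputs
```

Every concrete output for an input in the box is guaranteed to lie inside `[lo, hi]`, which is what makes the resulting robustness certificate sound.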
2 code implementations • NeurIPS Workshop AIPLANS 2021 • Chenxi Yang, Swarat Chaudhuri
We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic, human-written code.