no code implementations • 5 Dec 2024 • Jiayu Mao, Tongxin Yin, Aylin Yener, Mingyan Liu
We propose a wireless physical layer (PHY) design for OTA-FL that improves differential privacy (DP) through decentralized, dynamic power control: it exploits the inherent Gaussian noise of the wireless channel and enlists a cooperative jammer (CJ) to generate additional artificial noise when higher privacy levels are required.
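As an illustrative sketch of the core mechanism (not the paper's exact protocol), the snippet below computes a standard Gaussian-mechanism noise target and has the cooperative jammer supply only the shortfall beyond the channel's inherent noise; the specific DP bound and all variable names are assumptions for illustration.

```python
import numpy as np

def required_noise_std(sensitivity, epsilon, delta):
    """Standard Gaussian-mechanism noise level for (epsilon, delta)-DP."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def jammer_noise_std(sigma_target, sigma_channel):
    """Artificial noise the cooperative jammer adds on top of channel noise.
    Independent Gaussian variances add, so only the shortfall is generated."""
    return np.sqrt(max(0.0, sigma_target**2 - sigma_channel**2))

# Example: a strict privacy budget makes the channel noise alone insufficient.
sigma_target = required_noise_std(sensitivity=1.0, epsilon=0.5, delta=1e-5)
sigma_channel = 1.0  # assumed effective channel-noise std seen at the server
print(jammer_noise_std(sigma_target, sigma_channel))
```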
1 code implementation • 10 Oct 2023 • Tongxin Yin, Xuwei Tan, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
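For readers unfamiliar with the paradigm, a minimal FedAvg-style aggregation sketch (standard FL practice, not necessarily this paper's algorithm) shows how a server combines client updates without ever seeing local data:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style step)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with different amounts of local data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(fedavg(weights, sizes))  # server model after one aggregation round
```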
no code implementations • 9 Oct 2023 • Tongxin Yin, Jean-François Ton, Ruocheng Guo, Yuanshun Yao, Mingyan Liu, Yang Liu
To generalize the abstaining decisions to test samples, we then train a surrogate model to learn the abstaining decisions based on the IP solutions in an end-to-end manner.
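A simplified two-stage stand-in for this idea (the paper trains end-to-end from integer-program solutions; here a synthetic abstain rule and a plain scikit-learn classifier are placeholders):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))

# Placeholder for the integer-program output: a 0/1 abstain label per
# training sample (synthetic rule here; the paper derives these from IP).
abstain = (np.abs(X[:, 0]) < 0.3).astype(int)

# Surrogate model: learn to reproduce the abstaining decisions so that
# they generalize to unseen test samples.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, abstain)
print(surrogate.predict(rng.normal(size=(3, 5))))  # 1 = abstain, 0 = classify
```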
1 code implementation • 7 Jun 2023 • Xiusi Chen, Wei-Yao Wang, Ziniu Hu, David Reynoso, Kun Jin, Mingyan Liu, P. Jeffrey Brantingham, Wei Wang
In this study, we formulate the sequential decision-making process as a conditional trajectory generation process.
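The excerpt does not spell out the architecture; as a hypothetical illustration of the formulation, the sketch below packs logged episodes into return-conditioned token sequences, in the style popularized by decision-transformer-type models:

```python
# Hypothetical illustration: turning logged (state, action, reward) episodes
# into return-conditioned training sequences, in the spirit of casting
# sequential decision making as conditional trajectory generation.
def to_conditioned_sequence(states, actions, rewards):
    rtg, g = [], 0.0
    for r in reversed(rewards):       # return-to-go conditions the generator
        g += r
        rtg.append(g)
    rtg.reverse()
    seq = []
    for g, s, a in zip(rtg, states, actions):
        seq += [("rtg", g), ("state", s), ("action", a)]
    return seq

print(to_conditioned_sequence([0, 1, 2], ["a", "b", "c"], [1.0, 0.0, 2.0]))
```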
no code implementations • 8 May 2023 • Kun Jin, Tongxin Yin, Zhongzhu Chen, Zeyu Sun, Xueru Zhang, Yang Liu, Mingyan Liu
We consider a federated learning (FL) system consisting of multiple clients and a server, where the clients aim to collaboratively learn a common decision model from their distributed data.
no code implementations • 1 Nov 2022 • Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song
By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process.
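For Gaussian reverse conditionals, the highest-density point is the mean, which Tweedie's formula recovers from the score; the toy sketch below (analytic score for an N(0,1) data distribution, an assumption for illustration) computes that deterministic "reversed sample":

```python
import numpy as np

# Tweedie's formula: E[x_0 | x_t] from the score of the noised marginal;
# for a Gaussian conditional this mean is also its highest-density point.
def reverse_mode(x_t, score, alpha_bar_t):
    return (x_t + (1.0 - alpha_bar_t) * score(x_t)) / np.sqrt(alpha_bar_t)

# Toy check: data ~ N(0, 1), so the noised marginal and its score are analytic.
alpha_bar_t = 0.5
marginal_var = alpha_bar_t * 1.0 + (1.0 - alpha_bar_t)  # equals 1.0 here
score = lambda x: -x / marginal_var
print(reverse_mode(np.array([1.2]), score, alpha_bar_t))  # ~ sqrt(0.5) * 1.2
```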
1 code implementation • 17 Jun 2021 • Yulong Cao*, Ningfei Wang*, Chaowei Xiao*, Dawei Yang*, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li
In this paper, we present the first study of security issues of MSF-based perception in AD systems.
no code implementations • ICCV 2021 • MingJie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li
Specifically, EdgeNetRob and EdgeGANRob first explicitly extract shape structure features from a given image via an edge detection algorithm.
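A minimal sketch of that first stage using OpenCV's Canny detector (the thresholds and the random stand-in image are assumptions; the actual pipelines feed the edge map into a classifier or a GAN):

```python
import numpy as np
import cv2  # OpenCV

# First stage: extract shape/edge structure from an input image before
# classification, a robustness-motivated preprocessing step.
img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in image
edges = cv2.Canny(img, 100, 200)  # low/high hysteresis thresholds (assumed)
print(edges.shape, edges.dtype)   # binary edge map for the downstream model
```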
1 code implementation • NeurIPS 2020 • Xueru Zhang, Ruibo Tu, Yang Liu, Mingyan Liu, Hedvig Kjellström, Kun Zhang, Cheng Zhang
Our results show that static fairness constraints can either promote equality or exacerbate disparity depending on the driving factor of qualification transitions and the effect of sensitive attributes on feature distributions.
4 code implementations • NeurIPS 2020 • Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh
Several works have demonstrated this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles.
no code implementations • 14 Jan 2020 • Xueru Zhang, Mingyan Liu
However, in practice most decision-making processes are of a sequential nature, where decisions made in the past may have an impact on future data.
no code implementations • 8 Oct 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
The privacy-accuracy tradeoff can be improved significantly compared with conventional ADMM.
1 code implementation • 21 Jul 2019 • Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, MingJie Sun, JinFeng Yi, Zijiang Yang, Mingyan Liu, Bo Li, Dawn Song
To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
no code implementations • 11 Jul 2019 • Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: carefully crafted inputs with small-magnitude perturbations that aim to induce arbitrarily incorrect predictions.
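This paper attacks LiDAR-based perception, but the generic recipe behind such small-magnitude perturbations is easy to illustrate; the sketch below uses the classic fast gradient sign method (FGSM), which is not the paper's method:

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast gradient sign method: a classic small-perturbation adversarial
    example recipe (illustrative only; the paper perturbs LiDAR inputs)."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = np.array([0.2, 0.8, 0.5])       # toy input
grad = np.array([0.3, -1.2, 0.05])  # gradient of the loss w.r.t. the input
print(fgsm(x, grad, eps=0.03))
```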
no code implementations • NeurIPS 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Cem Tekin, Mingyan Liu
Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups contributing less to the training process; this in turn can degrade population retention in these groups over time, and exacerbate representation disparity in the long run.
no code implementations • 19 Nov 2018 • Kaiqing Zhang, Yang Liu, Ji Liu, Mingyan Liu, Tamer Başar
This paper addresses the problem of distributed learning of an average belief with sequential observations, in which a network of $n>1$ agents aims to reach consensus on the average value of their beliefs by exchanging information only with their neighbors.
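A minimal noiseless special case of such consensus dynamics (fixed graph and doubly stochastic weights, both assumptions for illustration):

```python
import numpy as np

# Each agent repeatedly averages with its neighbors via a doubly stochastic
# weight matrix W; states converge to the average of the initial beliefs.
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])  # path graph on 3 agents, doubly stochastic
x = np.array([1.0, 4.0, 7.0])   # initial beliefs; their average is 4.0
for _ in range(100):
    x = W @ x
print(x)  # all entries approach 4.0
```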
no code implementations • CVPR 2019 • Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu
Highly expressive models such as deep neural networks (DNNs) have been widely adopted in a broad range of applications.
no code implementations • ECCV 2018 • Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, Dawn Song
In this paper, we aim to characterize adversarial examples based on spatial context information in semantic segmentation.
no code implementations • 7 Oct 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
The alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems.
no code implementations • ICML 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
The alternating direction method of multipliers (ADMM) is a popular method for designing distributed versions of a machine learning algorithm, whereby local computations are performed on local data and the outputs are exchanged among neighbors in an iterative fashion.
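A compact consensus-ADMM sketch for distributed least squares illustrates that pattern: each worker minimizes over its own data, and only iterates are exchanged (synthetic data; this is plain ADMM, not the privacy-preserving variant studied in the paper):

```python
import numpy as np

# N workers each hold a local loss f_i(x) = 0.5 * ||A_i x - b_i||^2 and
# agree on a shared model z; no raw data leaves a worker.
rng = np.random.default_rng(1)
d, rho = 3, 1.0
A = [rng.normal(size=(20, d)) for _ in range(4)]
x_true = np.array([1.0, -2.0, 0.5])
b = [Ai @ x_true for Ai in A]

x = [np.zeros(d) for _ in A]
u = [np.zeros(d) for _ in A]
z = np.zeros(d)
for _ in range(50):
    # Local primal updates: each worker uses only its own (A_i, b_i).
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    # Consensus and dual updates use only the exchanged iterates.
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    u = [ui + xi - z for ui, xi in zip(u, x)]
print(z)  # close to x_true
```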
10 code implementations • ICLR 2018 • Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song
A challenge to explore the adversarial robustness of neural networks on MNIST.
3 code implementations • ICLR 2018 • Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song
Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
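A quick numeric check of that claim: even a one-pixel shift, visually imperceptible, produces large pixel-space distances (random image as a stand-in):

```python
import numpy as np

# A one-pixel spatial shift is imperceptible, yet its pixel-space L_p
# distances are large: the point made about spatially transformed examples.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=1, axis=1)  # shift right by one pixel
print(np.linalg.norm((img - shifted).ravel(), ord=2))  # large L2 distance
print(np.abs(img - shifted).max())                     # large L_inf as well
```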
no code implementations • 4 Apr 2015 • Yang Liu, Mingyan Liu
A natural remedy is a learning framework, which has also been extensively studied in the same context; however, a typical learning algorithm in this literature seeks only the best static policy, with performance measured by weak regret, rather than learning a good dynamic channel access policy.
no code implementations • 14 Sep 2013 • Yang Liu, Mingyan Liu
We analyze the following group learning problem in the context of opinion diffusion: Consider a network with $M$ users, each facing $N$ options.
no code implementations • 15 May 2013 • Cem Tekin, Mingyan Liu
In an online contract selection problem, a seller offers a set of contracts to sequentially arriving buyers whose types are drawn from an unknown distribution.
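A simplified stand-in for the setting: discretize the contract set and run UCB1, with a toy acceptance model in which a uniform-type buyer accepts any contract priced below their type (both are simplifications of the paper's continuum formulation):

```python
import numpy as np

rng = np.random.default_rng(2)
contracts = np.linspace(0.1, 0.9, 9)  # discretized contract prices (arms)
n = np.zeros(len(contracts))          # times each contract was offered
s = np.zeros(len(contracts))          # cumulative revenue per contract

for t in range(1, 5001):
    buyer_type = rng.random()         # unknown type distribution: U(0, 1)
    if np.any(n == 0):
        i = int(np.argmin(n))         # offer every contract once first
    else:
        ucb = s / n + np.sqrt(2.0 * np.log(t) / n)
        i = int(np.argmax(ucb))
    accepted = buyer_type >= contracts[i]  # toy acceptance rule
    n[i] += 1
    s[i] += contracts[i] if accepted else 0.0

print(contracts[int(np.argmax(s / n))])  # empirically best price (about 0.5)
```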
no code implementations • 20 Jul 2011 • Cem Tekin, Mingyan Liu
In an uncontrolled restless bandit problem, there is a finite set of arms, each of which, when pulled, yields a positive reward.
no code implementations • 14 Jul 2010 • Cem Tekin, Mingyan Liu
The player receives a state-dependent reward each time it plays an arm.
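A toy two-state Markovian arm makes the setting concrete (the transition matrix and rewards are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Two-state Markov chain whose reward depends on the state observed when
# the arm is played: the state-dependent reward setting described above.
rng = np.random.default_rng(3)
P = np.array([[0.9, 0.1],       # transition matrix of the arm's state
              [0.4, 0.6]])
rewards = np.array([1.0, 5.0])  # state-dependent rewards

state, total = 0, 0.0
for _ in range(1000):
    total += rewards[state]            # collect the current state's reward
    state = rng.choice(2, p=P[state])  # the arm's state evolves
print(total / 1000)  # approaches the stationary average reward (1.8)
```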