1 code implementation • NeurIPS 2023 • Yangdi Jiang, Xiaotian Chang, Yi Liu, Lei Ding, Linglong Kong, Bei Jiang
We develop an advanced approach for extending Gaussian Differential Privacy (GDP) to general Riemannian manifolds.
no code implementations • 31 Oct 2022 • Dongcui Diao, Hengshuai Yao, Bei Jiang
Recognizing and telling apart similar objects is hard even for human beings.
1 code implementation • 5 Oct 2022 • Meichen Liu, Lei Ding, Dengdeng Yu, Wulong Liu, Linglong Kong, Bei Jiang
To meet this need and underscore the significance of quantile fairness, we propose a novel framework that learns a real-valued quantile function under the fairness requirement of Demographic Parity with respect to sensitive attributes, such as race or gender, and thereby derives a reliable fair prediction interval.
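As an illustrative sketch (not the paper's implementation), a fairness-aware quantile objective can combine the standard pinball loss with a Demographic Parity penalty on the predicted quantiles across sensitive groups; the function names and the penalty weight `lam` here are hypothetical:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball (quantile) loss at quantile level tau."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def dp_penalty(q_pred, group):
    """Demographic Parity gap: difference in mean predicted quantiles
    between two sensitive groups (coded 0/1)."""
    return abs(q_pred[group == 0].mean() - q_pred[group == 1].mean())

def fair_quantile_objective(y, q_pred, tau, group, lam=1.0):
    """Quantile loss plus a fairness penalty weighted by lam (illustrative)."""
    return pinball_loss(y, q_pred, tau) + lam * dp_penalty(q_pred, group)
```

Minimizing this objective trades off quantile accuracy against the parity gap via `lam`.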
no code implementations • 29 Sep 2022 • Ke Sun, Bei Jiang, Linglong Kong
We consider the problem of learning a set of probability distributions from the Bellman dynamics in distributional reinforcement learning (RL), which learns the whole return distribution rather than only its expectation as in classical RL.
Distributional Reinforcement Learning • Reinforcement Learning
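A minimal sketch of the idea behind this line of work, assuming a quantile-based representation of the return distribution (illustrative, not the paper's algorithm): the distributional Bellman operator maps return samples through reward and discount, and each quantile estimate is nudged by the subgradient of the pinball loss:

```python
import numpy as np

def distributional_bellman_target(next_quantiles, reward, gamma=0.99):
    """Apply the distributional Bellman operator T Z = r + gamma * Z'
    elementwise to quantile samples of the next-state return distribution."""
    return reward + gamma * next_quantiles

def quantile_td_update(quantiles, target, lr=0.1):
    """One quantile-regression step: move each quantile estimate at midpoint
    level tau_i = (i + 0.5)/N toward the sorted target samples, using the
    pinball-loss subgradient (simplified one-to-one matching)."""
    n = len(quantiles)
    taus = (np.arange(n) + 0.5) / n
    target = np.sort(target)
    grad = np.where(target >= quantiles, taus, taus - 1.0)
    return quantiles + lr * grad
```

This captures the contrast with classical RL, which would track only the mean of these samples.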
1 code implementation • 20 May 2022 • Xing Chen, Dongcui Diao, Hechang Chen, Hengshuai Yao, Haiyin Piao, Zhixiao Sun, Zhiwei Yang, Randy Goebel, Bei Jiang, Yi Chang
The popular Proximal Policy Optimization (PPO) algorithm approximates the solution in a clipped policy space.
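The clipped surrogate objective referred to here is standard PPO and can be sketched in a few lines (a NumPy illustration, not the paper's code):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: E[min(r * A, clip(r, 1-eps, 1+eps) * A)],
    where r is the new/old policy probability ratio and A the advantage."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))
```

The `min` makes the objective a pessimistic bound, which is what restricts updates to the clipped policy space.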
no code implementations • 1 Feb 2022 • Ke Sun, Yingnan Zhao, Wulong Liu, Bei Jiang, Linglong Kong
The empirical success of distributional reinforcement learning (RL) depends heavily on the distribution representation and the choice of distribution divergence.
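To make the "choice of distribution divergence" concrete, here are two common options in sample or categorical form (an illustrative sketch; the paper's specific divergences may differ):

```python
import numpy as np

def wasserstein_1(samples_p, samples_q):
    """Empirical 1-Wasserstein distance between two equal-size sample sets:
    mean absolute difference of sorted samples (quantile coupling)."""
    return np.mean(np.abs(np.sort(samples_p) - np.sort(samples_q)))

def kl_divergence(p, q):
    """KL divergence between two categorical distributions with full support."""
    return np.sum(p * np.log(p / q))
```

Quantile-based methods typically target Wasserstein distances, while categorical methods minimize a KL-style loss after projection.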
1 code implementation • 9 Dec 2021 • Lei Ding, Dengdeng Yu, Jinhan Xie, Wenxing Guo, Shenggang Hu, Meichen Liu, Linglong Kong, Hongsheng Dai, Yanchun Bao, Bei Jiang
The proposed method allows us to construct and analyze the complex causal mechanisms that facilitate gender information flow while retaining oracle semantic information within word embeddings.
no code implementations • NeurIPS 2021 • Ke Sun, Yafei Wang, Yi Liu, Yingnan Zhao, Bo Pan, Shangling Jui, Bei Jiang, Linglong Kong
Anderson mixing has been heuristically applied to reinforcement learning (RL) algorithms for accelerating convergence and improving the sampling efficiency of deep RL.
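Anderson mixing accelerates a fixed-point iteration x ← G(x), such as value iteration, by combining recent iterates with weights that minimize the combined residual. A minimal sketch (generic Anderson acceleration, not the paper's RL variant):

```python
import numpy as np

def anderson_step(xs, gs):
    """One Anderson-mixing step. Given recent iterates xs and their images
    gs = [G(x) for x in xs], choose weights alpha minimizing
    ||sum_i alpha_i (g_i - x_i)|| subject to sum_i alpha_i = 1,
    then return the mixed update sum_i alpha_i g_i."""
    F = np.stack([g - x for x, g in zip(xs, gs)], axis=1)  # residual columns
    m = F.shape[1]
    # Solve the equality-constrained least squares via its KKT system.
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = F.T @ F
    A[:m, m] = 1.0
    A[m, :m] = 1.0
    b = np.zeros(m + 1)
    b[m] = 1.0
    alpha = np.linalg.solve(A, b)[:m]
    return sum(a * g for a, g in zip(alpha, gs))
```

For an affine contraction such as G(x) = 0.5x + 1, mixing just two iterates already lands on the fixed point x* = 2, which illustrates the acceleration effect.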
no code implementations • 7 Oct 2021 • Ke Sun, Yingnan Zhao, Enze Shi, Yafei Wang, Xiaodong Yan, Bei Jiang, Linglong Kong
The theoretical advantages of distributional reinforcement learning (RL) over classical RL remain elusive despite its remarkable empirical performance.
no code implementations • 29 Sep 2021 • Yi Liu, Ke Sun, Bei Jiang, Linglong Kong
Gaussian differential privacy (GDP) is a single-parameter family of privacy notions that provides coherent guarantees against exposing individuals through machine learning models.
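The single parameter referenced here is usually written mu: adding Gaussian noise with standard deviation sigma = sensitivity / mu to a statistic satisfies mu-GDP. A minimal sketch of that Gaussian mechanism (illustrative, not the paper's method):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, mu, seed=None):
    """Release value + N(0, sigma^2) with sigma = sensitivity / mu,
    which satisfies mu-Gaussian Differential Privacy for a query whose
    output changes by at most `sensitivity` between neighboring datasets."""
    rng = np.random.default_rng(seed)
    sigma = sensitivity / mu
    return value + rng.normal(0.0, sigma)
```

Smaller mu means more noise and a stronger privacy guarantee.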
no code implementations • 29 Sep 2021 • Ke Sun, Yingnan Zhao, Yi Liu, Enze Shi, Yafei Wang, Aref Sadeghi, Xiaodong Yan, Bei Jiang, Linglong Kong
Distributional reinforcement learning (RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation.
1 code implementation • 31 May 2021 • Chenglin Li, Di Niu, Bei Jiang, Xiao Zuo, Jianming Yang
However, the effectiveness of federated learning for HAR is affected by the fact that each user has different activity types and even a different signal distribution for the same activity type.
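The baseline aggregation that this heterogeneity complicates is federated averaging, in which client model parameters are combined weighted by local sample counts. A minimal sketch (generic FedAvg, not the paper's personalized scheme):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client parameter vectors weighted by
    each client's local sample count. Under heterogeneous users (different
    activity types, different signal distributions), this uniform global
    model is exactly what tends to underperform."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

With per-user activity and signal differences, a single averaged model can fit no user well, which motivates the personalization studied here.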
no code implementations • 31 May 2021 • Chenglin Li, Carrie Lu Tong, Di Niu, Bei Jiang, Xiao Zuo, Lei Cheng, Jian Xiong, Jianming Yang
Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently.
no code implementations • 27 Apr 2018 • Donglai Zhu, Hengshuai Yao, Bei Jiang, Peng Yu
In deep neural networks, the cross-entropy loss function is commonly used for classification.
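For reference, the standard cross-entropy loss from raw logits, computed via a numerically stable log-softmax (a plain NumPy sketch, not the paper's proposed alternative):

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy for one example: -log softmax(logits)[label],
    with the max subtracted for numerical stability."""
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]
```

For two equal logits the predicted distribution is uniform, so the loss is log 2.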