no code implementations • ICML 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan Yao
Over-parameterization is now ubiquitous in training neural networks, benefiting both optimization, by easing the search for global optima, and generalization, by reducing prediction error.
1 code implementation • 9 Feb 2024 • Gongxi Zhu, Donghao Li, Hanlin Gu, Yuxing Han, Yuan Yao, Lixin Fan, Qiang Yang
First, combining model information from multiple communication rounds (multi-temporal) makes membership inference attacks (MIAs) more effective than using model information from a single epoch.
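To make the multi-temporal idea concrete, here is a minimal sketch: instead of scoring a single model snapshot, the attacker builds features from a sample's loss trajectory across communication rounds and trains a binary attack classifier on them. The scikit-learn-style `predict_proba` interface, the logistic-regression attack model, and all data shapes are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a multi-temporal membership inference attack (MIA):
# per-sample losses are collected across several communication rounds and the
# loss trajectory is fed to a binary attack classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loss_trajectory(models, x, y):
    """Per-round cross-entropy losses of one sample under each global model."""
    losses = []
    for model in models:  # one model snapshot per communication round
        p = model.predict_proba(x.reshape(1, -1))[0]
        losses.append(-np.log(p[y] + 1e-12))
    return np.array(losses)

def fit_attack(models, member_data, nonmember_data):
    """Train the attack classifier on loss trajectories (multi-temporal features)."""
    X, z = [], []
    for x, y in member_data:
        X.append(loss_trajectory(models, x, y)); z.append(1)
    for x, y in nonmember_data:
        X.append(loss_trajectory(models, x, y)); z.append(0)
    return LogisticRegression().fit(np.array(X), np.array(z))
```

The intuition, in code form: a member's loss typically decays across rounds in a way a non-member's does not, so the trajectory tends to be more discriminative than any single-round loss.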
no code implementations • 7 Nov 2023 • Haoran Li, Dadi Guo, Donghao Li, Wei Fan, Qi Hu, Xin Liu, Chunkit Chan, Duanyi Yao, Yangqiu Song
Lastly, P-Bench performs existing privacy attacks on language models (LMs) with pre-defined privacy objectives and reports the outcomes as its empirical evaluation results.
1 code implementation • 27 Sep 2023 • June M. Liu, Donghao Li, He Cao, Tianhe Ren, Zeyi Liao, Jiamin Wu
This paper presents ChatCounselor, a large language model (LLM) solution designed to provide mental health support.
no code implementations • 9 Jun 2023 • Donghao Li, Ruiquan Huang, Cong Shen, Jing Yang
This paper investigates conservative exploration in reinforcement learning, where the performance of the learning agent is guaranteed to remain above a certain threshold throughout the learning process.
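As a minimal sketch of the conservative constraint (the switching rule and confidence bound below are illustrative assumptions, not the paper's algorithm), an agent can alternate between an exploratory policy and a known-safe baseline so that even a pessimistic estimate keeps cumulative performance above a (1 - alpha) fraction of the baseline's value:

```python
# Conservative exploration sketch: play the exploratory policy only when an
# estimated lower bound on cumulative reward stays above the conservative
# threshold; otherwise fall back to the safe baseline.
def choose_policy(t, cum_reward, baseline_value, lcb_exploratory, alpha=0.1):
    """Return 'explore' if the conservative constraint provably holds, else 'baseline'.

    t: episodes played so far
    cum_reward: reward accumulated so far
    baseline_value: known per-episode value of the safe baseline policy
    lcb_exploratory: lower confidence bound on the exploratory policy's value
    """
    # required cumulative reward after one more episode
    threshold = (1 - alpha) * baseline_value * (t + 1)
    if cum_reward + lcb_exploratory >= threshold:
        return "explore"   # even a pessimistic outcome keeps us above threshold
    return "baseline"      # play safe to preserve the guarantee
```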
no code implementations • 5 May 2023 • Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, Yuan Yao
In this paper, we aim to bridge this gap by presenting a framework for random smoothing regularization that can adaptively and effectively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
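Schematically, and with notation assumed rather than taken from the paper, random smoothing regularization can be read as fitting the data through randomly perturbed inputs:

```latex
% Schematic estimator for random smoothing regularization (assumed notation):
% fit f by averaging the loss over random input perturbations \delta drawn
% from a smoothing distribution \mu.
\[
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;
\frac{1}{n} \sum_{i=1}^{n}
\mathbb{E}_{\delta \sim \mu}\!\left[ \ell\bigl( f(x_i + \delta),\, y_i \bigr) \right]
\]
```

Here the choice of the smoothing distribution mu controls the effective smoothness of the learned function, which is the lever for adapting across Sobolev classes.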
1 code implementation • 14 Feb 2022 • Donghao Li, Yang Cao, Yuan Yao
To further enhance utility and address the label collapse issue when the mixup degree is large, we propose a hierarchical sampling method that stratifies the mixup samples over a small number of classes.
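A minimal sketch of such a two-stage scheme (the dataset layout, Dirichlet mixing weights, and function names are illustrative assumptions, not the paper's exact method): first draw a small set of k classes, then mix only samples from those classes, so the soft label stays supported on at most k classes instead of collapsing toward uniform as the mixup degree m grows.

```python
# Hierarchical sampling for mixup: stage 1 picks a few classes, stage 2 mixes
# samples drawn only from those classes, keeping the mixed label concentrated.
import numpy as np

def hierarchical_mixup(X, y, num_classes, k=2, m=8, seed=0):
    """Mix m samples drawn from only k classes; return mixed input and soft label."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(num_classes, size=k, replace=False)   # stage 1: pick classes
    idx = np.concatenate([np.flatnonzero(y == c) for c in classes])
    picks = rng.choice(idx, size=m, replace=True)              # stage 2: pick samples
    w = rng.dirichlet(np.ones(m))                              # mixing weights
    x_mix = np.tensordot(w, X[picks], axes=1)                  # weighted sample average
    y_mix = np.zeros(num_classes)
    for wi, i in zip(w, picks):
        y_mix[y[i]] += wi                                      # soft label on <= k classes
    return x_mix, y_mix
```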
1 code implementation • 23 May 2019 • Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan Yao
To fill this gap, this paper proposes a new approach based on differential inclusions of inverse scale spaces, which couple a pair of parameters to generate a family of models, from simple to complex, along the dynamics, so that over-parameterized deep models and their structural sparsity can be explored simultaneously.
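Assuming the standard split Linearized Bregman Iteration form that this line of work builds on (the hyperparameters kappa, nu, and step size alpha come from that generic formulation, not from this paper), the coupled dynamics discretize as:

```latex
% Schematic split Linearized Bregman Iteration coupling dense weights W with a
% sparse path \Gamma (generic formulation, assumed rather than quoted).
\[
\begin{aligned}
\bar{L}(W, \Gamma) &= L(W) + \tfrac{1}{2\nu}\,\lVert W - \Gamma \rVert_2^2,\\
W_{t+1} &= W_t - \kappa\,\alpha\,\nabla_W \bar{L}(W_t, \Gamma_t),\\
z_{t+1} &= z_t - \alpha\,\nabla_\Gamma \bar{L}(W_t, \Gamma_t),\\
\Gamma_{t+1} &= \kappa \cdot \mathrm{prox}_{\lVert\cdot\rVert_1}(z_{t+1}),
\end{aligned}
\]
```

Here prox is soft-thresholding: W carries the over-parameterized dense weights, while Gamma traces a sparsity path from simple to complex models as t grows.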
no code implementations • ICLR 2019 • Yanwei Fu, Shun Zhang, Donghao Li, Xinwei Sun, Xiangyang Xue, Yuan Yao
This paper proposes a Pruning in Training (PiT) framework that learns to reduce the parameter size of networks during training.
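For orientation only, here is a generic prune-during-training step using magnitude pruning as a stand-in; PiT learns its pruning criterion, so this is not the paper's method, and the mask and threshold scheme below are assumptions.

```python
# Generic sketch of pruning during training (NOT the PiT criterion): every few
# steps, the smallest-magnitude surviving weights are zeroed and masked out,
# so the effective parameter size shrinks while training continues.
import torch

def prune_step(weight, mask, sparsity):
    """Zero the smallest-magnitude weights until `sparsity` is reached."""
    flat = (weight * mask).abs().flatten()
    k = int(sparsity * flat.numel())
    if k > 0:
        threshold = flat.kthvalue(k).values         # k-th smallest magnitude
        mask *= (weight.abs() > threshold).float()  # drop weights at/below it
    weight.data *= mask                             # apply the updated mask
    return mask
```

A training loop would call prune_step(layer.weight, mask, s) every few hundred iterations, with the target sparsity s ramped up over training.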
no code implementations • 24 Apr 2019 • Yanwei Fu, Donghao Li, Xinwei Sun, Shun Zhang, Yizhou Wang, Yuan Yao
This paper proposes a novel Stochastic Split Linearized Bregman Iteration ($S^{2}$-LBI) algorithm to train deep networks efficiently.
no code implementations • 28 Nov 2017 • Di Yuan, Xiaohuan Lu, Donghao Li, Yingyi Liang, Xinming Zhang
Most correlation-filter-based tracking algorithms achieve good performance while maintaining high computational speed.
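For context, the classical single-channel correlation filter underlying these trackers (the shared textbook formulation, not this paper's specific variant) has a closed-form solution in the Fourier domain, which is the source of their speed:

```latex
% Classical correlation filter (ridge regression) solved in the Fourier
% domain: \hat{\cdot} is the DFT, * complex conjugation, \odot element-wise
% product, \lambda a regularization weight, and y the desired response.
\[
\hat{w} \;=\; \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda}
\]
```

Both training and detection then reduce to FFTs and element-wise operations, so the filter can be updated and evaluated in near real time.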