no code implementations • 6 Jun 2023 • Xinbiao Wang, Yuxuan Du, Zhuozhuo Tu, Yong Luo, Xiao Yuan, DaCheng Tao
Recent progress has highlighted the positive impact of entanglement on learning quantum dynamics: integrating entanglement into the quantum operations or measurements of quantum machine learning (QML) models substantially reduces the training data size required to meet a specified prediction error threshold.
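As a toy illustration of why entanglement can shrink the required training data (a sketch under our own assumptions, not the paper's protocol): sending half of a maximally entangled pair through an unknown gate produces its Choi state, which pins the gate down from a single input state.

```python
import numpy as np

def choi_from_entangled_probe(U):
    """Feed one half of a Bell pair through the unknown 2x2 gate U.
    The output (U x I)|Phi+> is the Choi state of U, which determines
    U completely -- one entangled input replaces many product probes."""
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)          # (|00> + |11>) / sqrt(2)
    out = np.kron(U, np.eye(2)) @ bell
    return np.outer(out, out.conj())            # rank-1 Choi state

# Illustrative "unknown dynamics": a single-qubit rotation.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
choi = choi_from_entangled_probe(U)
```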
no code implementations • 10 May 2022 • Yuxuan Du, Zhuozhuo Tu, Bujiao Wu, Xiao Yuan, DaCheng Tao
We further employ these generalization bounds to exhibit potential advantages in quantum state preparation and Hamiltonian learning.
no code implementations • CVPR 2022 • Jiyang Guan, Zhuozhuo Tu, Ran He, DaCheng Tao
Deep neural networks have achieved impressive performance in a variety of tasks over the last decade, such as autonomous driving, face recognition, and medical diagnosis.
no code implementations • 12 Dec 2021 • Shiye Lei, Zhuozhuo Tu, Leszek Rutkowski, Feng Zhou, Li Shen, Fengxiang He, DaCheng Tao
Bayesian neural networks (BNNs) have become a principal approach to alleviate overconfident predictions in deep learning, but they often suffer from scaling issues due to a large number of distribution parameters.
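A minimal sketch of a mean-field variational layer, showing where those distribution parameters come from: each weight stores a mean and a log-standard-deviation, doubling the parameter count of an ordinary layer (class name and initialization are illustrative, not the paper's model).

```python
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    """Mean-field Gaussian linear layer: every weight carries a mean and
    a log-std, so the layer holds twice the parameters of nn.Linear."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterization trick: sample weights, then apply them.
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps
        return x @ w.t()
```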
no code implementations • 29 Sep 2021 • Zhuozhuo Tu, Zhiqiang Xu, Tairan Huang, DaCheng Tao, Ping Li
Federated Learning is a machine learning technique in which a network of clients collaborates with a server to learn a centralized model while keeping data localized.
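A minimal sketch of one round of that collaboration in the FedAvg style (the least-squares objective, function name, and hyperparameters are our illustrative choices, not the paper's algorithm):

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """Each client refines the global weights on its own data, which
    never leaves the client; the server averages the local results,
    weighted by client data size."""
    updates, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):            # local SGD, least squares
            w -= lr * X.T @ (X @ w - y) / len(y)
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))
```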
no code implementations • 1 Jan 2021 • Zhuozhuo Tu, Shan You, Tao Huang, DaCheng Tao
Wasserstein distributionally robust optimization (DRO) has recently received significant attention in machine learning due to its connection to generalization, robustness and regularization.
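One well-known face of the regularization connection, sketched under assumptions we pick for illustration (logistic loss, 1-Wasserstein ball with l2 transport cost, labels left unperturbed): the robust objective collapses to the empirical risk plus a norm penalty on the weights.

```python
import numpy as np

def wdro_logistic_objective(w, X, y, eps):
    """Dual form of Wasserstein DRO for logistic regression: the
    worst-case risk over an eps-radius 1-Wasserstein ball equals the
    empirical risk plus eps * ||w||_2 (l2 transport cost assumed)."""
    margins = y * (X @ w)                        # y in {-1, +1}
    empirical = np.mean(np.log1p(np.exp(-margins)))
    return empirical + eps * np.linalg.norm(w, 2)
```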
no code implementations • 1 Jan 2021 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
Differentiable neural architecture search (NAS) has achieved considerable success in discovering more flexible and diverse cell types.
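A minimal sketch of the continuous relaxation behind this line of work, in the DARTS style (the class is illustrative rather than the paper's code): every edge blends its candidate operations with a softmax over learnable architecture parameters, so the cell type can be searched by gradient descent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One supernet edge: a softmax over architecture parameters alpha
    mixes the candidate ops; after search, the argmax op is kept."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```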
no code implementations • 18 Nov 2020 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
However, even with this consistent search, the searched cells often perform poorly, especially under supernets with fewer layers: current DARTS methods are prone to wide, shallow cells, and this topology collapse yields sub-optimal searched cells.
no code implementations • ICLR 2020 • Zhuozhuo Tu, Fengxiang He, DaCheng Tao
We first present a new generalization bound for recurrent neural networks based on matrix 1-norm and Fisher-Rao norm.
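For concreteness, a small sketch of the first quantity (the matrix 1-norm is the maximum absolute column sum; the Fisher-Rao norm additionally depends on the data, so it is not computed here). The product at the end is a purely illustrative capacity-style summary, not the paper's bound.

```python
import numpy as np

# Weights of a vanilla RNN  h_t = tanh(U x_t + W h_{t-1})  (toy sizes).
rng = np.random.default_rng(0)
U = rng.standard_normal((64, 32))
W = rng.standard_normal((64, 64))

# ord=1 gives the matrix 1-norm: the maximum absolute column sum.
norm_U = np.linalg.norm(U, 1)
norm_W = np.linalg.norm(W, 1)
capacity_proxy = norm_U * norm_W   # illustrative summary, not the bound
```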
no code implementations • NeurIPS 2019 • Zhuozhuo Tu, Jingwei Zhang, DaCheng Tao
Here we propose a general theoretical method for analyzing the risk bound in the presence of adversaries.
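As a concrete instance of a risk in the presence of an adversary, here is the standard closed form for a linear classifier under l-infinity perturbations (our illustrative special case, not the paper's general method):

```python
import numpy as np

def adversarial_hinge_risk(w, X, y, eps):
    """Worst-case hinge loss when each input may move by at most eps in
    l-infinity norm: the optimal attack lowers every margin by
    eps * ||w||_1, giving the adversarial risk in closed form."""
    margins = y * (X @ w) - eps * np.linalg.norm(w, 1)
    return np.mean(np.maximum(0.0, 1.0 - margins))
```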