1 code implementation • 14 Sep 2023 • Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
Many efforts have been made to develop intelligent agents, but they mainly focus on advances in algorithms or training strategies that enhance specific capabilities or performance on particular tasks.
no code implementations • 4 Sep 2023 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv
However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.
1 code implementation • 11 Jul 2023 • Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model.
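For reference, the standard clipped surrogate objective of PPO, which PPO-max builds on (the paper's specific stability-oriented modifications go beyond this snippet), is:

$$
\mathcal{L}^{\mathrm{CLIP}}(\theta)=\mathbb{E}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta)=\frac{\pi_\theta(a_t\mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}
$$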
1 code implementation • 23 May 2023 • Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang
For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA.
no code implementations • 27 Feb 2023 • Yuhao Zhou, Mingjia Shi, Yuanxi Li, Qing Ye, Yanan Sun, Jiancheng Lv
Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning.
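As one illustrative remedy from this family (not necessarily the paper's own mechanism), top-k gradient sparsification lets each client transmit only the largest-magnitude gradient entries; a minimal NumPy sketch with illustrative names:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Client side: keep only the k largest-magnitude entries and send
    (indices, values) instead of the dense gradient."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k magnitudes
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Server side: rebuild a dense (mostly zero) gradient to aggregate."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

g = np.random.randn(4, 4)
idx, vals = topk_sparsify(g, k=3)   # roughly 80% fewer values on the wire
g_hat = densify(idx, vals, g.shape)
```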
no code implementations • 19 Nov 2022 • Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv
Federated learning (FL for short) is a distributed machine learning technique that utilizes global servers and collaborative clients to achieve privacy-preserving global model training without direct data sharing.
Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)
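For context, a minimal sketch of the vanilla FedAvg aggregation step that such FL systems build on (the paper's personalization mechanism is more involved; all names below are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Vanilla FedAvg: average per-layer client parameters weighted by
    local dataset size; raw data never leaves the clients."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
        for l in range(n_layers)
    ]

# Example: two clients, one-layer "model"
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
print(fedavg([w_a, w_b], client_sizes=[10, 30]))  # -> [array([2.5, 3.5])]
```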
1 code implementation • ACL 2022 • Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) that are capable of reaching accuracy comparable to that of the original models.
1 code implementation • 2 Nov 2022 • Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu
The commonly used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps to produce high-quality samples.
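For context, a single deterministic DDIM update (the $\eta=0$ case) can be sketched as below; variable names are ours, and the one model evaluation required per step is what makes 100 to 250 steps costly:

```python
import torch

@torch.no_grad()
def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update: x_t is the current noisy sample,
    eps the model's noise prediction, alpha_bar_* the cumulative
    noise-schedule products at the current and previous timesteps."""
    # Clean-sample estimate implied by the predicted noise.
    x0_pred = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    # Step to the previous timestep along the (first-order) ODE trajectory.
    return alpha_bar_prev ** 0.5 * x0_pred + (1 - alpha_bar_prev) ** 0.5 * eps
```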
1 code implementation • 2 Jun 2022 • Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu
In this work, we propose an exact formulation of the solution of diffusion ODEs.
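In common notation, with $\alpha_t,\sigma_t$ the noise schedule and $\lambda_t=\log(\alpha_t/\sigma_t)$ the log-SNR, the diffusion ODE and the variation-of-constants form of its solution read:

$$
\frac{\mathrm{d}x_t}{\mathrm{d}t}=f(t)\,x_t+\frac{g^2(t)}{2\sigma_t}\,\epsilon_\theta(x_t,t),
\qquad
x_t=\frac{\alpha_t}{\alpha_s}x_s-\alpha_t\int_{\lambda_s}^{\lambda_t}e^{-\lambda}\,\hat{\epsilon}_\theta(\hat{x}_\lambda,\lambda)\,\mathrm{d}\lambda .
$$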
1 code implementation • 22 May 2022 • Ziyu Wang, Yuhao Zhou, Jun Zhu
We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments.
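Not the paper's method, but the classical linear baseline that nonlinear IV approaches generalize, two-stage least squares (2SLS), fits in a few lines (NumPy sketch; names ours):

```python
import numpy as np

def two_stage_least_squares(Z, X, Y):
    """Classical linear 2SLS. Z: instruments (n x q), X: endogenous
    regressors (n x p), Y: outcomes (n,). Returns stage-2 coefficients."""
    # Stage 1: project the regressors onto the instruments.
    beta1, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ beta1
    # Stage 2: regress the outcome on the projected regressors.
    beta2, *_ = np.linalg.lstsq(X_hat, Y, rcond=None)
    return beta2
```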
no code implementations • 6 Apr 2022 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.
no code implementations • 28 Feb 2022 • Yuchi Zhao, Yuhao Zhou
We propose Fuse Local and Global Semantics in Representation Learning (FLAGS) to generate richer representations.
1 code implementation • 19 Feb 2022 • Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey
Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems.
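A classical concrete instance is the score-function (REINFORCE) estimator, $\nabla_\theta\,\mathbb{E}_{x\sim p_\theta}[f(x)]=\mathbb{E}_{x\sim p_\theta}[f(x)\,\nabla_\theta\log p_\theta(x)]$, sketched here in PyTorch with an illustrative choice of $p_\theta$ and $f$:

```python
import torch

# Score-function (REINFORCE) estimate of d/dmu E_{x ~ N(mu,1)}[f(x)],
# via E[f(x) * d/dmu log p_mu(x)]; no gradient flows through f or x.
mu = torch.tensor(0.5, requires_grad=True)
dist = torch.distributions.Normal(mu, 1.0)
f = lambda x: x ** 2                       # any black-box objective

x = dist.sample((100_000,))                # sampling detaches x from mu
surrogate = (f(x) * dist.log_prob(x)).mean()
surrogate.backward()
print(mu.grad)                             # ~= 2 * mu = 1.0, since E[x^2] = mu^2 + 1
```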
no code implementations • 19 Aug 2021 • Yuhao Zhou, Huanhuan Fan, Shuang Gao, Yuchen Yang, Xudong Zhang, Jijunnan Li, Yandong Guo
The localization pipeline is designed as a coarse-to-fine paradigm.
1 code implementation • NeurIPS 2021 • Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu
Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.
1 code implementation • 21 Apr 2021 • Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv
Personalized DKT is achieved via a pivot module that reconstructs the decoder for individual students and leveled learning that specializes encoders for groups of students.
1 code implementation • 12 Dec 2020 • Yuhao Zhou, Ye Qing, Jiancheng Lv
Petabytes of data are generated each day by the emerging Internet of Things (IoT), but only a small fraction can ultimately be collected and used for machine learning (ML) purposes due to concerns over data and privacy leakage, which seriously retards ML's growth.
no code implementations • 11 Dec 2020 • Yuhao Zhou, Ruijie Wang, Yi-Cheng Zhang, An Zeng, Matúš Medo
We propose a new PageRank variant which outperforms PageRank in all evaluated settings, yet shares its sensitivity to increased randomness in the data.
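For reference, the standard PageRank baseline the variant is measured against is plain power iteration with a damping factor (sketch assumes a column-stochastic adjacency matrix):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for standard PageRank. A must be column-stochastic:
    A[i, j] is the probability of following a link from node j to node i."""
    n = A.shape[0]
    p = np.full(n, 1.0 / n)                  # uniform starting distribution
    for _ in range(max_iter):
        p_next = d * (A @ p) + (1.0 - d) / n # follow links, or teleport
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
```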
1 code implementation • 6 Sep 2020 • Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
While distributed training significantly speeds up the training process of a deep neural network (DNN), cluster utilization is relatively low due to time-consuming data synchronization between workers.
1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan sun, Jiancheng Lv
Specifically, the performance of each worker is first evaluated based on measurements from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in light of the worker's current performance, thereby improving cluster utilization.
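A toy sketch of that load-balancing idea (purely illustrative, not the paper's implementation): each worker receives a share of the global batch proportional to its measured throughput from the previous epoch:

```python
def rebalance_batches(throughputs, global_batch):
    """Assign each worker a batch size proportional to its samples/sec
    from the last epoch, so fast and slow workers finish their local
    steps at roughly the same time."""
    total = sum(throughputs)
    sizes = [max(1, round(global_batch * t / total)) for t in throughputs]
    sizes[-1] += global_batch - sum(sizes)   # absorb rounding drift
    return sizes

print(rebalance_batches([120.0, 80.0, 40.0], global_batch=480))  # [240, 160, 80]
```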
no code implementations • 25 May 2020 • Huanhuan Fan, Yuhao Zhou, Ang Li, Shuang Gao, Jijunnan Li, Yandong Guo
In this paper, we propose a monocular visual localization pipeline leveraging semantic and depth cues.
no code implementations • CVPR 2020 • Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, Sanja Fidler
Simulation is a crucial component of any robotic system.
1 code implementation • ICML 2020 • Yuhao Zhou, Jiaxin Shi, Jun Zhu
Estimating the score, i.e., the gradient of the log density function, from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models that involve flexible yet intractable densities.
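Concretely, the quantity estimated from samples $x_1,\dots,x_n\sim p$, without ever evaluating $p$ itself, is the score

$$ s_p(x)=\nabla_x\log p(x). $$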
1 code implementation • 12 Jun 2019 • Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba
To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space.
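A toy version of such an evolutionary loop over design graphs (`mutate` and `fitness` are placeholders for the paper's domain-specific operators, e.g. adding or removing a limb node and scoring in a physics simulator):

```python
import random

def evolve_designs(seed_graphs, mutate, fitness, generations=50, pop_size=32):
    """Toy evolutionary search in graph space: mutate candidate design
    graphs, evaluate them, and keep the fittest."""
    population = list(seed_graphs)
    for _ in range(generations):
        children = [mutate(random.choice(population)) for _ in range(pop_size)]
        # Survivor selection: retain the top pop_size designs by fitness.
        population = sorted(population + children, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)
```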
1 code implementation • 3 Nov 2018 • Samvit Jain, Xun Zhang, Yuhao Zhou, Ganesh Ananthanarayanan, Junchen Jiang, Yuanchao Shu, Joseph Gonzalez
Enterprises are increasingly deploying large camera networks for video analytics.
no code implementations • CVPR 2018 • Yuhao Zhou, Makarand Tapaswi, Sanja Fidler
We are interested in enabling automatic 4D cinema by parsing physical and special effects from untrimmed movies.
1 code implementation • 18 Sep 2017 • Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou
In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning.