Search Results for author: Yuhao Zhou

Found 29 papers, 17 papers with code

Federated cINN Clustering for Accurate Clustered Federated Learning

no code implementations4 Sep 2023 Yuhao Zhou, Minjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv

However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.

Clustering Federated Learning +1

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

1 code implementation23 May 2023 Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang

For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively.

GSM8K

Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence

no code implementations27 Feb 2023 Yuhao Zhou, Mingjia Shi, Yuanxi Li, Qing Ye, Yanan Sun, Jiancheng Lv

Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning.

Federated Learning Privacy Preserving

Personalized Federated Learning with Hidden Information on Personalized Prior

no code implementations19 Nov 2022 Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv

Federated learning (FL) is a distributed machine learning technique that utilizes global servers and collaborative clients to achieve privacy-preserving global model training without direct data sharing.

Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)

Classification Image Classification +2

Robust Lottery Tickets for Pre-trained Language Models

1 code implementation ACL 2022 Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang

Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.

Adversarial Robustness

DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models

1 code implementation2 Nov 2022 Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu

The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples.
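To make the baseline concrete, here is a minimal sketch of one deterministic DDIM update (the first-order solver mentioned above), with a placeholder noise predictor standing in for a trained network; the schedule values and `eps_model` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ddim_step(x_t, eps, abar_t, abar_prev):
    """One deterministic DDIM step from noise level abar_t to abar_prev."""
    # Predict the clean sample x0 from the current noisy sample.
    x0_hat = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    # Re-noise x0_hat to the previous (less noisy) level, reusing eps.
    return np.sqrt(abar_prev) * x0_hat + np.sqrt(1.0 - abar_prev) * eps

def eps_model(x_t, t):
    # Placeholder noise predictor; a real sampler calls a trained network here.
    return np.zeros_like(x_t)

x = np.random.default_rng(0).normal(size=(4, 8))  # batch of pure-noise samples
schedule = np.linspace(0.05, 0.999, 50)           # abar increasing toward clean
for abar_t, abar_prev in zip(schedule[:-1], schedule[1:]):
    x = ddim_step(x, eps_model(x, abar_t), abar_t, abar_prev)
```

DPM-Solver++ replaces this first-order update with higher-order ones, which is what cuts the step count well below DDIM's 100 to 250.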

Fast Instrument Learning with Faster Rates

1 code implementation22 May 2022 Ziyu Wang, Yuhao Zhou, Jun Zhu

We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments.

Model Selection regression
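For readers unfamiliar with the IV setup, here is a minimal sketch of classical linear IV estimation with a single instrument (the Wald/2SLS estimator); the data-generating process below is an illustrative assumption, and the paper itself studies nonlinear, high-dimensional settings well beyond this.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)            # instrument: affects x but not y directly
u = rng.normal(size=n)            # unobserved confounder
x = z + u + 0.1 * rng.normal(size=n)
y = 2.0 * x + u + 0.1 * rng.normal(size=n)   # true causal effect is 2.0

beta_ols = np.dot(x, y) / np.dot(x, x)   # biased upward by the confounder u
beta_iv = np.dot(z, y) / np.dot(z, x)    # IV estimate, consistent for 2.0
```

The instrument isolates variation in `x` that is independent of `u`, which is why `beta_iv` recovers the causal coefficient while OLS does not.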

DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg

no code implementations6 Apr 2022 Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv

Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.

Federated Learning

Fuse Local and Global Semantics in Representation Learning

no code implementations28 Feb 2022 Yuchi Zhao, Yuhao Zhou

We propose Fuse Local and Global Semantics in Representation Learning (FLAGS) to generate richer representations.

Representation Learning

Gradient Estimation with Discrete Stein Operators

1 code implementation19 Feb 2022 Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey

Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems.
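As a concrete instance of the problem statement, here is the classical score-function (REINFORCE) estimator for a Bernoulli distribution, which is the standard baseline that Stein-operator methods aim to improve on; the test function `f` is an arbitrary illustrative choice.

```python
import numpy as np

def f(x):
    return (x - 0.3) ** 2          # arbitrary test function on {0, 1}

# For X ~ Bernoulli(p): d/dp E[f(X)] = E[f(X) * d/dp log p(X)] = f(1) - f(0).
p = 0.6
rng = np.random.default_rng(0)
x = (rng.random(200_000) < p).astype(float)   # samples from Bernoulli(p)

# Score of the Bernoulli pmf: d/dp log p(x) = x/p - (1-x)/(1-p).
score = x / p - (1.0 - x) / (1.0 - p)
grad_est = np.mean(f(x) * score)

grad_true = f(1.0) - f(0.0)       # exact gradient of E[f(X)] w.r.t. p
```

The estimator is unbiased but high-variance; reducing that variance for discrete distributions is exactly the gap the paper's Stein-operator control variates target.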

Quasi-Bayesian Dual Instrumental Variable Regression

1 code implementation NeurIPS 2021 Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu

Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.

Bayesian Inference regression

Scalable Quasi-Bayesian Inference for Instrumental Variable Regression

no code implementations NeurIPS 2021 Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu

Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.

Bayesian Inference regression

LANA: Towards Personalized Deep Knowledge Tracing Through Distinguishable Interactive Sequences

1 code implementation21 Apr 2021 Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv

With a pivot module that reconstructs the decoder for individual students and leveled-learning specialized encoders for groups, personalized DKT is achieved.


Knowledge Tracing

Communication-Efficient Federated Learning with Compensated Overlap-FedAvg

1 code implementation12 Dec 2020 Yuhao Zhou, Ye Qing, Jiancheng Lv

Petabytes of data are generated each day by the emerging Internet of Things (IoT), but only a few of them can ultimately be collected and used for Machine Learning (ML) purposes due to the apprehension of data and privacy leakage, which seriously retards ML's growth.

Data Compression Federated Learning

Limits of PageRank-based ranking methods in sports data

no code implementations11 Dec 2020 Yuhao Zhou, Ruijie Wang, Yi-Cheng Zhang, An Zeng, Matúš Medo

We propose a new PageRank variant which outperforms PageRank in all evaluated settings, yet shares its sensitivity to increased randomness in the data.
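For reference, here is vanilla PageRank via power iteration, the baseline the proposed variant modifies; the variant's exact form is not reproduced here, and the tiny "league" graph below is an illustrative assumption.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """adj[i, j] = 1 if node i links to node j; returns the rank vector."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Tiny sports graph: an edge i -> j means team j beat team i,
# so rank flows from losers to winners.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
ranks = pagerank(adj)   # team 2, beaten by no one here except via team 0's win, ranks highest
```

The damping factor `d` controls how much rank follows the edges versus the uniform teleport, which is also where PageRank's sensitivity to randomness in the results enters.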

HPSGD: Hierarchical Parallel SGD With Stale Gradients Featuring

1 code implementation6 Sep 2020 Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv

While distributed training significantly speeds up the training process of the deep neural network (DNN), the utilization of the cluster is relatively low due to the time-consuming data synchronizing between workers.

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training

1 code implementation23 Jul 2020 Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv

Specifically, the performance of each worker is evaluated first based on measurements from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the current performance of the worker, thereby improving the utilization of the cluster.
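The re-balancing idea can be sketched as allocating each worker a batch size proportional to its measured throughput from the previous epoch, so all workers finish an iteration at roughly the same time; the function and variable names below are hypothetical, not the paper's API.

```python
def rebalance_batches(global_batch, samples_per_sec):
    """Split global_batch across workers proportionally to throughput."""
    total = sum(samples_per_sec)
    sizes = [int(global_batch * s / total) for s in samples_per_sec]
    sizes[0] += global_batch - sum(sizes)   # assign rounding remainder
    return sizes

# Worker 0 is twice as fast as worker 2, so it receives a larger share
# of the fixed global batch of 512 samples.
sizes = rebalance_batches(512, [200.0, 150.0, 100.0])
```

Keeping the global batch size fixed while shifting per-worker shares preserves the training dynamics while reducing the time fast workers spend waiting at synchronization barriers.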

Nonparametric Score Estimators

1 code implementation ICML 2020 Yuhao Zhou, Jiaxin Shi, Jun Zhu

Estimating the score, i.e., the gradient of the log density function, from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models that involve flexible yet intractable densities.
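To pin down the object being estimated: for a 1-D Gaussian the score is available in closed form, which gives a ground truth for sanity-checking any estimator. This is only a definition-level illustration, not the paper's nonparametric estimator.

```python
import numpy as np

def gaussian_score(x, mu=0.0, sigma=1.0):
    # grad_x log N(x; mu, sigma^2) = -(x - mu) / sigma^2
    return -(x - mu) / sigma ** 2

def log_density(x, mu=0.0, sigma=1.0):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Check the closed form against a central finite difference of log p.
x = np.linspace(-2, 2, 9)
h = 1e-5
fd = (log_density(x + h) - log_density(x - h)) / (2 * h)
```

The hard setting the paper addresses is the reverse direction: only samples from the distribution are available, and `log_density` itself is unknown.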

Neural Graph Evolution: Towards Efficient Automatic Robot Design

1 code implementation12 Jun 2019 Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba

To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space.

Neural Graph Evolution: Automatic Robot Design

no code implementations ICLR 2019 Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba

To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space.

Now You Shake Me: Towards Automatic 4D Cinema

no code implementations CVPR 2018 Yuhao Zhou, Makarand Tapaswi, Sanja Fidler

We are interested in enabling automatic 4D cinema by parsing physical and special effects from untrimmed movies.

ZhuSuan: A Library for Bayesian Deep Learning

1 code implementation18 Sep 2017 Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou

In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning.

Probabilistic Programming regression
