Search Results for author: Yuhao Zhou

Found 41 papers, 24 papers with code

Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals

no code implementations24 Mar 2024 Rui Zheng, Yuhao Zhou, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang

We first empirically show that the features of clean signals and of adversarial perturbations are redundant and span low-dimensional linear subspaces with minimal overlap, and that classical low-dimensional subspace projection can suppress perturbation features lying outside the subspace of clean signals.

Adversarial Defense
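As an illustration of the idea above, here is a minimal sketch of projecting features onto a learned clean-signal subspace; all names, shapes, and the SVD-based construction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def clean_subspace_projector(clean_feats, k):
    """Estimate a rank-k subspace from clean feature vectors (n, d)
    and return the d x d orthogonal projector onto that subspace."""
    # Top-k right singular vectors span the clean-signal subspace.
    _, _, vt = np.linalg.svd(clean_feats - clean_feats.mean(0), full_matrices=False)
    basis = vt[:k]                      # (k, d)
    return basis.T @ basis              # projector P = V V^T

rng = np.random.default_rng(0)
# Toy setup: clean signals live in a 3-dim subspace of a 10-dim feature space.
basis_true = rng.standard_normal((3, 10))
clean = rng.standard_normal((200, 3)) @ basis_true
P = clean_subspace_projector(clean, k=3)

x = rng.standard_normal(3) @ basis_true   # a clean feature vector
delta = rng.standard_normal(10)           # an arbitrary perturbation
defended = P @ (x + delta)                # perturbation components outside the
                                          # clean subspace are discarded
print(np.allclose(P @ x, x, atol=1e-8))
```

The projection leaves the clean component untouched while removing any perturbation energy orthogonal to the clean subspace.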

A Survey on Temporal Knowledge Graph: Representation Learning and Applications

no code implementations2 Mar 2024 Li Cai, Xin Mao, Yuhao Zhou, Zhaoguang Long, Changxu Wu, Man Lan

Knowledge graph representation learning aims to learn low-dimensional vector embeddings for entities and relations in a knowledge graph.

Graph Representation Learning Knowledge Graphs
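A minimal TransE-style sketch of the low-dimensional embeddings described above; the toy vocabulary and the scoring function are assumptions for illustration, not this survey's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Hypothetical toy vocabulary of entities and relations.
entities = {"paris": 0, "france": 1}
relations = {"capital_of": 0}
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE distance: a plausible triple should satisfy h + r ≈ t,
    so lower scores indicate more plausible triples."""
    return np.linalg.norm(E[h] + R[r] - E[t])

s = transe_score(entities["paris"], relations["capital_of"], entities["france"])
print(s >= 0.0)
```

Training would adjust `E` and `R` so true triples score lower than corrupted ones; temporal KG models extend this by conditioning on a timestamp.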

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

1 code implementation8 Feb 2024 Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, wei he, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang

In this paper, we propose R$^3$: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models.

GSM8K reinforcement-learning +1

Compositional Inductive Invariant Based Verification of Neural Network Controlled Systems

1 code implementation17 Dec 2023 Yuhao Zhou, Stavros Tripakis

Verifying the inductiveness of a candidate inductive invariant in the context of NNCS is hard because of the scale and nonlinearity of neural networks.

LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

1 code implementation15 Dec 2023 Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.

Language Modelling Multi-Task Learning +1

The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing

no code implementations2 Nov 2023 Shen Nie, Hanzhong Allan Guo, Cheng Lu, Yuhao Zhou, Chenyu Zheng, Chongxuan Li

We present a unified probabilistic formulation for diffusion-based image editing, where a latent variable is edited in a task-specific manner and generally deviates from the corresponding marginal distribution induced by the original stochastic or ordinary differential equation (SDE or ODE).

Image-to-Image Translation

PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning

1 code implementation13 Oct 2023 Mingjia Shi, Yuhao Zhou, Kai Wang, Huaizheng Zhang, Shudong Huang, Qing Ye, Jiangcheng Lv

Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data.

Federated Learning

Federated cINN Clustering for Accurate Clustered Federated Learning

no code implementations4 Sep 2023 Yuhao Zhou, Minjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv

However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.

Clustering Federated Learning +1

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

1 code implementation23 May 2023 Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang

For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively.

GSM8K

Personalized Federated Learning with Hidden Information on Personalized Prior

no code implementations19 Nov 2022 Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv

Federated learning (FL for simplification) is a distributed machine learning technique that utilizes global servers and collaborative clients to achieve privacy-preserving global model training without direct data sharing.

 Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)

Classification Image Classification +2
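The global model training described above typically uses FedAvg-style aggregation, which personalized FL methods build on. A minimal sketch with hypothetical toy clients (not this paper's personalized-prior method):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size weighted average,
    so no raw data ever leaves the clients."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients with local parameter vectors and dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = fedavg(clients, sizes)
print(global_model)  # weighted toward the larger client's parameters
```

Personalized FL then derives per-client models from this shared global model rather than deploying it directly.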

Robust Lottery Tickets for Pre-trained Language Models

2 code implementations ACL 2022 Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang

Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.

Adversarial Robustness

DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models

1 code implementation2 Nov 2022 Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu

The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples.

Text-to-Image Generation

Fast Instrument Learning with Faster Rates

1 code implementation22 May 2022 Ziyu Wang, Yuhao Zhou, Jun Zhu

We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments.

Model Selection regression +1

DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg

no code implementations6 Apr 2022 Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv

Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.

Federated Learning

Fuse Local and Global Semantics in Representation Learning

no code implementations28 Feb 2022 Yuchi Zhao, Yuhao Zhou

We propose Fuse Local and Global Semantics in Representation Learning (FLAGS) to generate richer representations.

Representation Learning

Gradient Estimation with Discrete Stein Operators

1 code implementation19 Feb 2022 Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey

Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems.
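The classic score-function (REINFORCE) estimator that such work improves upon can be sketched as follows; the Bernoulli toy case and sample count are assumptions for illustration, not this paper's Stein-operator estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(theta, f, n=200_000):
    """REINFORCE estimate of d/dtheta E_{x ~ Bernoulli(theta)}[f(x)],
    using grad = E[f(x) * d/dtheta log p(x; theta)]."""
    x = rng.random(n) < theta                              # Bernoulli samples
    # d/dtheta log p: 1/theta for x=1, -1/(1-theta) for x=0.
    score = np.where(x, 1.0 / theta, -1.0 / (1.0 - theta))
    return np.mean(f(x.astype(float)) * score)

# For f(x) = x, E[f] = theta, so the true gradient is exactly 1.
g = score_function_grad(0.3, lambda x: x)
print(round(g, 2))
```

This estimator is unbiased but high-variance; variance-reduced estimators like the one above replace or augment the raw score term with better control variates.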

Quasi-Bayesian Dual Instrumental Variable Regression

1 code implementation NeurIPS 2021 Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu

Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.

Bayesian Inference regression +1

Scalable Quasi-Bayesian Inference for Instrumental Variable Regression

no code implementations NeurIPS 2021 Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu

Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.

Bayesian Inference regression +1

LANA: Towards Personalized Deep Knowledge Tracing Through Distinguishable Interactive Sequences

1 code implementation21 Apr 2021 Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv

With a pivot module reconstructing the decoder for individual students, and leveled learning specializing encoders for groups, personalized DKT was achieved.

Knowledge Tracing

Communication-Efficient Federated Learning with Compensated Overlap-FedAvg

1 code implementation12 Dec 2020 Yuhao Zhou, Qing Ye, Jiancheng Lv

Petabytes of data are generated each day by the emerging Internet of Things (IoT), but only a few of them can ultimately be collected and used for Machine Learning (ML) purposes due to concerns about data and privacy leakage, which seriously retards ML's growth.

Data Compression Federated Learning

Limits of PageRank-based ranking methods in sports data

no code implementations11 Dec 2020 Yuhao Zhou, Ruijie Wang, Yi-Cheng Zhang, An Zeng, Matúš Medo

We propose a new PageRank variant which outperforms PageRank in all evaluated settings, yet shares its sensitivity to increased randomness in the data.
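For context, basic PageRank on a sports win matrix can be sketched as below; the paper's improved variant is not reproduced here, and the "loser votes for winner" link convention is an assumption for illustration.

```python
import numpy as np

def pagerank(wins, d=0.85, tol=1e-12):
    """Power-iteration PageRank where wins[i, j] = times team i beat team j.
    Each loss is treated as an outgoing link from the loser to the winner."""
    n = wins.shape[0]
    L = wins.T.astype(float)           # L[i, j]: link weight i -> j (i lost to j)
    L[L.sum(axis=1) == 0] = 1.0 / n    # undefeated teams link uniformly (dangling)
    M = (L / L.sum(axis=1, keepdims=True)).T   # column-stochastic transition
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Team 0 beat teams 1 and 2; team 1 beat team 2.
wins = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
r = pagerank(wins)
print(r.argsort()[::-1])   # ranking, best team first
```

The damping factor `d` is the same source of randomness the paper finds PageRank-style methods sensitive to: as match outcomes become noisier, rankings from this scheme degrade.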

HPSGD: Hierarchical Parallel SGD With Stale Gradients Featuring

1 code implementation6 Sep 2020 Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv

While distributed training significantly speeds up the training process of the deep neural network (DNN), the utilization of the cluster is relatively low due to the time-consuming data synchronizing between workers.

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training

1 code implementation23 Jul 2020 Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan sun, Jiancheng Lv

Specifically, the performance of each worker is evaluated first based on observations from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the current performance of the worker, thereby improving the utilization of the cluster.
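A toy sketch of throughput-proportional batch-size rebalancing; the function name and the proportional policy are illustrative assumptions, not DBS's exact scheme.

```python
def rebalance_batches(global_batch, throughputs):
    """Assign each worker a batch size proportional to its measured
    throughput (e.g. samples/sec in the previous epoch), keeping the
    global batch size fixed so the training semantics do not change."""
    total = sum(throughputs)
    sizes = [round(global_batch * t / total) for t in throughputs]
    sizes[-1] += global_batch - sum(sizes)   # absorb rounding drift
    return sizes

# A fast worker gets twice the batch of each slow worker.
print(rebalance_batches(256, [100.0, 50.0, 50.0]))  # [128, 64, 64]
```

Fast workers thus spend roughly the same wall-clock time per step as slow ones, reducing synchronization stalls.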

Nonparametric Score Estimators

1 code implementation ICML 2020 Yuhao Zhou, Jiaxin Shi, Jun Zhu

Estimating the score, i.e., the gradient of the log density function, from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models that involve flexible yet intractable densities.
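What "score" means here can be checked on a Gaussian, where it is known in closed form; this is a sanity-check sketch, not the paper's nonparametric estimator (which must work from samples alone).

```python
import numpy as np

mu, sigma = 1.0, 2.0

def log_p(x):
    """Log density of N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def score_true(x):
    # For a Gaussian, the score d/dx log p(x) is -(x - mu) / sigma^2.
    return -(x - mu) / sigma**2

# Verify the closed form against a central finite difference of log p.
x, eps = 2.5, 1e-6
score_fd = (log_p(x + eps) - log_p(x - eps)) / (2 * eps)
print(np.isclose(score_fd, score_true(x), atol=1e-5))
```

The hard problem addressed by the paper is recovering this quantity when only samples, not `log_p`, are available.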

Neural Graph Evolution: Towards Efficient Automatic Robot Design

1 code implementation12 Jun 2019 Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba

To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space.

Neural Graph Evolution: Automatic Robot Design

no code implementations ICLR 2019 Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba

To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space.

Now You Shake Me: Towards Automatic 4D Cinema

no code implementations CVPR 2018 Yuhao Zhou, Makarand Tapaswi, Sanja Fidler

We are interested in enabling automatic 4D cinema by parsing physical and special effects from untrimmed movies.

ZhuSuan: A Library for Bayesian Deep Learning

1 code implementation18 Sep 2017 Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou

In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning.

Probabilistic Programming regression
