Search Results for author: Shaogao Lv

Found 11 papers, 0 papers with code

Personalized Federated Learning via Amortized Bayesian Meta-Learning

no code implementations5 Jul 2023 Shiyu Liu, Shaogao Lv, Dun Zeng, Zenglin Xu, Hui Wang, Yue Yu

Federated learning is a decentralized and privacy-preserving technique that enables multiple clients to collaborate with a server to learn a global model without exposing their private data.

Tasks: Meta-Learning, Personalized Federated Learning (+2 more)
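
The entry above describes the standard federated learning setup: clients train locally and share only model updates with a server. As a rough illustration of that loop, here is a minimal plain-FedAvg sketch in NumPy on a made-up linear-regression task; it is not the paper's amortized Bayesian meta-learning method, and all data and hyperparameters below are hypothetical.

```python
import numpy as np

# Minimal FedAvg-style loop (illustrative only; the paper's personalized,
# amortized Bayesian meta-learning approach is more involved).
rng = np.random.default_rng(0)
d, n_clients, rounds, local_steps, lr = 5, 4, 20, 10, 0.1

# Hypothetical private client datasets: each client keeps (X_k, y_k) locally.
true_w = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(d)
for _ in range(rounds):
    local_ws = []
    for X, y in clients:                       # each client trains locally
        w = global_w.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)  # squared-loss gradient
            w -= lr * grad
        local_ws.append(w)                     # only model parameters are shared
    global_w = np.mean(local_ws, axis=0)       # server averages client models

print("distance to true weights:", np.linalg.norm(global_w - true_w))
```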

Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix

no code implementations5 Jul 2023 Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li

Overall, our research highlights the importance of aligning feature and graph information in GSL, as motivated by our theoretical result, and demonstrates through comprehensive experiments on real-world datasets that our approach is superior in handling noisy graph structures.

Tasks: Graph Structure Learning
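
The snippet above argues for aligning node features with the adjacency matrix in graph structure learning. One generic way to quantify such alignment (not necessarily the measure derived in the paper) is the Dirichlet energy, i.e. how smoothly the features vary across edges; a minimal NumPy sketch on a toy graph:

```python
import numpy as np

def dirichlet_energy(A, X):
    """Smoothness of node features X on a graph with adjacency A:
    the sum over edges of squared feature differences (lower = better aligned)."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A              # unnormalized graph Laplacian
    return np.trace(X.T @ L @ X)

# Hypothetical toy graph: a triangle plus one extra node attached to it.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [1.1], [0.9], [5.0]])  # node 3's feature disagrees with its neighbor

print(dirichlet_energy(A, X))  # dominated by the contribution of the (2, 3) edge
```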

Stability and Generalization of $\ell_p$-Regularized Stochastic Learning for GCN

no code implementations20 May 2023 Shiyu Liu, Linsen Wei, Shaogao Lv, Ming Li

For a single-layer GCN, we establish an explicit theoretical understanding of GCN with $\ell_p$-regularized stochastic learning by analyzing the stability of our proximal SGD algorithm.

Tasks: Graph Learning
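
The stability analysis above concerns a proximal SGD algorithm under $\ell_p$ regularization. As a sketch of what one proximal SGD step looks like, here is the closed-form $\ell_1$ special case (soft-thresholding); for general $1 < p < 2$ the proximal operator usually has no closed form and would be computed numerically. The names and hyperparameters below are hypothetical.

```python
import numpy as np

def prox_l1(w, lam):
    """Proximal operator of lam * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def proximal_sgd_step(w, grad, lr, lam):
    """One proximal SGD step: gradient step on the loss, then the prox of the
    regularizer. Only the l_1 special case of l_p regularization is shown."""
    return prox_l1(w - lr * grad, lr * lam)

# Hypothetical usage with a random stochastic gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
w = proximal_sgd_step(w, grad=rng.normal(size=8), lr=0.1, lam=5.0)
print(w)  # small-magnitude coordinates are set exactly to zero by soft-thresholding
```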

Improved Learning Rates of a Functional Lasso-type SVM with Sparse Multi-Kernel Representation

no code implementations NeurIPS 2021 Shaogao Lv, Junhui Wang, Jiankun Liu, Yong Liu

In this paper, we provide theoretical results of estimation bounds and excess risk upper bounds for support vector machine (SVM) with sparse multi-kernel representation.
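
To make the setting concrete: the decision function is built from a sparsely weighted combination of candidate kernels. The following is only a rough illustration using scikit-learn's precomputed-kernel SVM with hand-set sparse weights; the functional covariates and the Lasso-type penalty analyzed in the paper are not reproduced.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)

# A dictionary of candidate kernels; a sparse weight vector keeps only some of them.
kernels = [rbf_kernel(X, X, gamma=0.5),
           rbf_kernel(X, X, gamma=2.0),
           polynomial_kernel(X, X, degree=2)]
weights = np.array([0.7, 0.0, 0.3])          # hand-set sparse weights (illustrative)

K = sum(w * K_m for w, K_m in zip(weights, kernels) if w > 0)
clf = SVC(kernel="precomputed").fit(K, y)    # SVM on the combined kernel matrix
print("training accuracy:", clf.score(K, y))
```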

SStaGCN: Simplified stacking based graph convolutional networks

no code implementations16 Nov 2021 Jia Cai, Zhilong Xiong, Shaogao Lv

The graph convolutional network (GCN) is a powerful model that has been studied broadly in learning tasks on graph-structured data.
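
For readers unfamiliar with GCNs, which several entries on this page build on, here is the standard single-layer propagation rule of Kipf and Welling, $\sigma\big(\tilde{D}^{-1/2}(A+I)\tilde{D}^{-1/2} X W\big)$, sketched in NumPy; the stacking simplification proposed in SStaGCN itself is not shown.

```python
import numpy as np

def gcn_layer(A, X, W):
    """Standard GCN propagation: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)     # ReLU activation

# Hypothetical toy inputs: 3 nodes, 4 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
print(gcn_layer(A, X, W))
```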

Kernel-based estimation for partially functional linear model: Minimax rates and randomized sketches

no code implementations18 Oct 2021 Shaogao Lv, Xin He, Junhui Wang

This paper considers the partially functional linear model (PFLM), where all predictive features consist of a functional covariate and a high-dimensional scalar vector.
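
For concreteness, the PFLM is commonly written as below; this is the usual textbook form, and the notation may differ slightly from the paper's.

```latex
% Partially functional linear model (common formulation): scalar response Y,
% functional covariate X(t) on [0, 1], scalar covariates Z in R^p.
\[
  Y \;=\; \int_{0}^{1} X(t)\,\beta(t)\,dt \;+\; Z^{\top}\gamma \;+\; \varepsilon ,
\]
% where beta(.) is an unknown slope function, gamma in R^p is a (possibly
% high-dimensional) coefficient vector, and epsilon is random noise.
```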

Communication-efficient Byzantine-robust distributed learning with statistical guarantee

no code implementations28 Feb 2021 Xingcai Zhou, Le Chang, Pengfei Xu, Shaogao Lv

To address communication efficiency and Byzantine robustness simultaneously, this paper develops two communication-efficient and robust distributed learning algorithms for convex problems.
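
The paper develops its own two algorithms, which are not reproduced here. As a generic illustration of why robust aggregation matters in the Byzantine setting, the sketch below replaces the plain mean of worker gradients with a coordinate-wise median (a classic robust aggregator, not necessarily the one used by the authors):

```python
import numpy as np

def robust_aggregate(grads):
    """Coordinate-wise median of worker gradients: a classic Byzantine-robust
    alternative to simple averaging."""
    return np.median(np.stack(grads), axis=0)

# Hypothetical round: 9 honest workers plus 2 Byzantine workers sending garbage.
rng = np.random.default_rng(0)
honest = [np.ones(4) + 0.05 * rng.normal(size=4) for _ in range(9)]
byzantine = [1e3 * rng.normal(size=4) for _ in range(2)]

print("mean  :", np.mean(np.stack(honest + byzantine), axis=0))  # pulled far from 1 by the outliers
print("median:", robust_aggregate(honest + byzantine))           # stays near 1
```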

Generalization bounds for graph convolutional neural networks via Rademacher complexity

no code implementations20 Feb 2021 Shaogao Lv

This paper studies the sample complexity of graph convolutional networks (GCNs) by providing tight upper bounds on the Rademacher complexity of GCN models with a single hidden layer.

Tasks: Generalization Bounds
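
For reference, bounds of this kind are stated in terms of the standard empirical Rademacher complexity:

```latex
% Empirical Rademacher complexity of a function class F on a sample S = (x_1, ..., x_n),
% with i.i.d. Rademacher variables sigma_i uniform on {-1, +1}:
\[
  \widehat{\mathfrak{R}}_S(\mathcal{F})
  \;=\;
  \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i)\right].
\]
```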

Financial Market Directional Forecasting With Stacked Denoising Autoencoder

no code implementations2 Dec 2019 Shaogao Lv, Yongchao Hou, Hongwei Zhou

Forecasting stock market direction is an appealing but challenging problem in finance.

Tasks: Denoising
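
As a generic reminder of what a denoising autoencoder layer does (corrupt the input, then learn to reconstruct the clean version), here is an untrained NumPy sketch of the corruption step and the reconstruction objective; the paper's stacked architecture, market data, and training procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, drop_prob=0.3):
    """Masking noise: randomly zero out a fraction of the input entries."""
    return x * (rng.random(x.shape) > drop_prob)

# A denoising autoencoder is trained to map corrupt(x) back to the clean x;
# stacking several such layers yields the learned representation. Only the
# corruption and the reconstruction objective are sketched here, with an
# untrained random linear encoder/decoder on hypothetical feature windows.
x = rng.normal(size=(8, 16))
W_enc = 0.1 * rng.normal(size=(16, 6))
W_dec = 0.1 * rng.normal(size=(6, 16))

h = np.tanh(corrupt(x) @ W_enc)               # encode the corrupted input
x_hat = h @ W_dec                             # decode back to input space
reconstruction_loss = np.mean((x_hat - x) ** 2)
print("reconstruction MSE (untrained):", reconstruction_loss)
```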

Efficient kernel-based variable selection with sparsistency

no code implementations26 Feb 2018 Xin He, Junhui Wang, Shaogao Lv

Variable selection is central to high-dimensional data analysis, and various algorithms have been developed.

Tasks: Variable Selection

Debiased distributed learning for sparse partial linear models in high dimensions

no code implementations18 Aug 2017 Shaogao Lv, Heng Lian

Although various distributed machine learning schemes have been proposed recently for pure linear models and fully nonparametric models, little attention has been paid to distributed optimization for semi-parametric models with multiple-level structures (e.g., sparsity, linearity and nonlinearity).

Tasks: Distributed Optimization
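
For context, the sparse partially linear model referred to above is commonly written as follows (a standard formulation; the paper's debiasing step and the split of data across machines are not shown):

```latex
% Sparse partially linear model: response Y, high-dimensional linear part X in R^p
% with a sparse coefficient vector beta, nonparametric part g(.) of a scalar covariate T.
\[
  Y \;=\; X^{\top}\beta \;+\; g(T) \;+\; \varepsilon ,
  \qquad \|\beta\|_{0} \ll p .
\]
```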
