Search Results for author: Michael P. Friedlander

Found 9 papers, 3 papers with code

Knowledge-Injected Federated Learning

1 code implementation • 16 Aug 2022 • Zhenan Fan, Zirui Zhou, Jian Pei, Michael P. Friedlander, Jiajie Hu, Chengliang Li, Yong Zhang

Federated learning is an emerging technique for training models from decentralized data sets.

Federated Learning

A dual approach for federated learning

1 code implementation • 26 Jan 2022 • Zhenan Fan, Huang Fang, Michael P. Friedlander

We study the federated optimization problem from a dual perspective and propose a new algorithm termed federated dual coordinate descent (FedDCD), which is based on a type of coordinate descent method developed by Necoara et al. [Journal of Optimization Theory and Applications, 2017].

Federated Learning
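As context for the abstract above: FedDCD builds on randomized coordinate descent methods in the style of Necoara et al., in which each step updates a single randomly chosen coordinate with a per-coordinate step size. The following is a minimal illustrative sketch of that generic building block, not the FedDCD algorithm itself; the function names and the quadratic test objective are assumptions for illustration.

```python
import numpy as np

def random_coordinate_descent(grad, x0, coord_lipschitz, n_iters=200, seed=0):
    """Illustrative randomized coordinate descent (the family of
    methods FedDCD builds on). Each step updates one randomly
    chosen coordinate using its coordinate-wise Lipschitz step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        i = rng.integers(len(x))          # pick a random coordinate
        x[i] -= grad(x)[i] / coord_lipschitz[i]
    return x

# Toy separable quadratic f(x) = 0.5 * x^T diag(d) x - b^T x,
# whose minimizer is b / d (purely a demonstration objective).
d = np.array([2.0, 4.0, 8.0])
b = np.array([1.0, 2.0, 3.0])
x_star = random_coordinate_descent(lambda x: d * x - b, np.zeros(3), d)
```

In a federated setting, the appeal of a dual coordinate view is that coordinate blocks can be associated with individual clients, so each update touches only one client's data.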

Fair and efficient contribution valuation for vertical federated learning

no code implementations • 7 Jan 2022 • Zhenan Fan, Huang Fang, Zirui Zhou, Jian Pei, Michael P. Friedlander, Yong Zhang

We show that VerFedSV not only satisfies many desirable properties for fairness but is also efficient to compute, and can be adapted to both synchronous and asynchronous vertical federated learning algorithms.

Fairness • Federated Learning

NBIHT: An Efficient Algorithm for 1-bit Compressed Sensing with Optimal Error Decay Rate

no code implementations • 23 Dec 2020 • Michael P. Friedlander, Halyun Jeong, Yaniv Plan, Ozgur Yilmaz

The Binary Iterative Hard Thresholding (BIHT) algorithm is a popular reconstruction method for one-bit compressed sensing due to its simplicity and fast empirical convergence.

Information Theory • Numerical Analysis • 94-XX
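For readers unfamiliar with BIHT (the baseline this paper analyzes): it reconstructs an s-sparse, unit-norm signal from one-bit measurements y = sign(Ax) by alternating a gradient-like consistency step with hard thresholding. A minimal sketch, assuming a dense NumPy measurement matrix and a fixed step size; this is the standard BIHT iteration, not the paper's NBIHT variant:

```python
import numpy as np

def biht(y, A, s, n_iters=100, tau=0.01):
    """Sketch of Binary Iterative Hard Thresholding (BIHT).
    Recovers an s-sparse unit-norm signal from one-bit
    measurements y = sign(A @ x)."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient-like step toward sign consistency with y
        x = x + tau * A.T @ (y - np.sign(A @ x))
        # Hard threshold: keep only the s largest-magnitude entries
        x[np.argsort(np.abs(x))[:-s]] = 0.0
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x
```

Because one-bit measurements discard all magnitude information, the output is normalized to the unit sphere; the paper's contribution concerns how fast the reconstruction error of such iterations can decay.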

Polar Deconvolution of Mixed Signals

1 code implementation • 14 Oct 2020 • Zhenan Fan, Halyun Jeong, Babhru Joshi, Michael P. Friedlander

The signal demixing problem seeks to separate a superposition of multiple signals into its constituent components.

Online mirror descent and dual averaging: keeping pace in the dynamic case

no code implementations • ICML 2020 • Huang Fang, Nicholas J. A. Harvey, Victor S. Portella, Michael P. Friedlander

Online mirror descent (OMD) and dual averaging (DA) -- two fundamental algorithms for online convex optimization -- are known to have very similar (and sometimes identical) performance guarantees when used with a fixed learning rate.
