no code implementations • 3 Sep 2024 • Halyun Jeong, Jihun Han
Fourier embedding has shown great promise in removing spectral bias during neural network training.
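As a rough illustration, below is a minimal random-Fourier-feature embedding of the kind often used to counter spectral bias; the Gaussian frequency matrix `B` and scale `sigma` are illustrative assumptions, not the construction studied in the paper.

```python
import numpy as np

def fourier_embed(x, B):
    """Map inputs x (n, d) to [cos(2*pi*xB), sin(2*pi*xB)] features.

    B is a (d, m) matrix of random frequencies; larger frequencies
    let a downstream network fit higher-frequency targets, which is
    how Fourier embeddings mitigate spectral bias.
    """
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
d, m, sigma = 1, 64, 10.0            # sigma: assumed frequency scale
B = rng.normal(0.0, sigma, size=(d, m))
x = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
features = fourier_embed(x, B)       # (100, 128) embedded inputs
```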
no code implementations • 2 Mar 2024 • Halyun Jeong, Deanna Needell, Elizaveta Rebrova
We propose SGD-exp, a stochastic gradient descent approach for linear and ReLU regressions under Massart noise (adversarial semi-random corruption model) for the fully streaming setting.
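A hedged sketch of what streaming SGD with an exponentially decaying step size can look like for linear regression; the schedule `gamma0 * q**t` and the toy sign-flip corruption below are assumptions suggested by the name SGD-exp, not the paper's exact algorithm or noise model.

```python
import numpy as np

def sgd_exp_linear(stream, d, gamma0=1.0, q=0.999):
    """Streaming SGD for linear regression with an exponentially
    decaying step size (a sketch; gamma0 * q**t is an assumed
    schedule, not the paper's tuning).

    `stream` yields (a, y) pairs one at a time; each pair is used
    once, matching the fully streaming setting.
    """
    x = np.zeros(d)
    for t, (a, y) in enumerate(stream):
        grad = (a @ x - y) * a          # gradient of 0.5 * (a @ x - y)**2
        x -= gamma0 * (q ** t) * grad
    return x

# toy stream: linear measurements of a planted x_star, with a fraction
# of responses adversarially sign-flipped (a crude stand-in for
# Massart-style corruption)
rng = np.random.default_rng(1)
d = 5
x_star = rng.normal(size=d)

def make_stream(n):
    for _ in range(n):
        a = rng.normal(size=d)
        y = a @ x_star
        if rng.random() < 0.2:
            y = -y
        yield a, y

x_hat = sgd_exp_linear(make_stream(20000), d)
```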
no code implementations • 20 Apr 2023 • Halyun Jeong, Deanna Needell
The Kaczmarz method (KZ) and its variants, which are types of stochastic gradient descent (SGD) methods, have been studied extensively for their simplicity and efficiency in solving systems of linear equations.
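For reference, a minimal randomized Kaczmarz iteration with the classical row sampling proportional to squared row norms; this is the standard Strohmer–Vershynin variant, not necessarily the variants analyzed in the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Randomized Kaczmarz for Ax = b: at each step, project the
    iterate onto the hyperplane of one row, chosen with probability
    proportional to its squared norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
x = randomized_kaczmarz(A, A @ x_true, iters=5000)   # x ≈ x_true
```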
no code implementations • 20 Feb 2023 • Halyun Jeong, Deanna Needell, Jing Qin
In particular, federated learning (FL) provides such a solution: a shared model is learned while the training data remain at the local clients.
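To illustrate the FL pattern of local updates plus server averaging, here is a minimal FedAvg-style round for a shared linear model; FedAvg is one canonical scheme, and the paper's algorithm is not necessarily this one.

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One FedAvg-style round: each client runs a few local gradient
    steps on its own (A_k, y_k) data, and the server averages the
    resulting weights. Raw data never leave the client; only model
    updates are communicated."""
    updated = []
    for A_k, y_k in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            grad = A_k.T @ (A_k @ w - y_k) / len(y_k)
            w -= lr * grad
        updated.append(w)
    return np.mean(updated, axis=0)   # simple unweighted average

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```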
no code implementations • 23 Dec 2020 • Michael P. Friedlander, Halyun Jeong, Yaniv Plan, Ozgur Yilmaz
The Binary Iterative Hard Thresholding (BIHT) algorithm is a popular reconstruction method for one-bit compressed sensing due to its simplicity and fast empirical convergence.
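A minimal BIHT sketch for one-bit measurements y = sign(Ax): each iteration takes a (sub)gradient step toward sign agreement and then hard-thresholds to the s largest entries. The step size `tau` and iteration count below are illustrative choices.

```python
import numpy as np

def biht(y, A, s, tau=1.0, iters=100):
    """Binary Iterative Hard Thresholding for one-bit compressed
    sensing: recover an s-sparse direction from y = sign(A x)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        x = x + (tau / m) * A.T @ (y - np.sign(A @ x))
        keep = np.argsort(np.abs(x))[-s:]       # indices of s largest entries
        pruned = np.zeros(n)
        pruned[keep] = x[keep]
        x = pruned
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x          # one-bit data fix only the direction

rng = np.random.default_rng(0)
n, m, s = 100, 500, 5
x_true = np.zeros(n); x_true[:s] = rng.normal(size=s)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
x_hat = biht(np.sign(A @ x_true), A, s)
```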
Information Theory • Numerical Analysis • 94-XX
1 code implementation • 14 Oct 2020 • Zhenan Fan, Halyun Jeong, Babhru Joshi, Michael P. Friedlander
The signal demixing problem seeks to separate a superposition of multiple signals into its constituent components.
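One standard convex formulation of demixing, sketched here with ISTA on a joint lasso over concatenated dictionaries; this is a generic illustration and not the algorithm from the paper's implementation.

```python
import numpy as np

def demix_ista(b, D1, D2, lam=0.1, iters=500):
    """Convex demixing sketch: model b ≈ D1 c1 + D2 c2 with both
    coefficient vectors sparse, and solve the joint lasso
    min 0.5*||b - D c||^2 + lam*||c||_1 over D = [D1 D2] by ISTA."""
    D = np.hstack([D1, D2])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1/L for the smooth part
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        c = c - step * D.T @ (D @ c - b)          # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft-threshold
    c1, c2 = c[:D1.shape[1]], c[D1.shape[1]:]
    return D1 @ c1, D2 @ c2                       # the two recovered components
```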
no code implementations • 28 Jan 2020 • Halyun Jeong, Xiaowei Li, Yaniv Plan, Özgür Yılmaz
In many applications, e.g., compressed sensing, this norm may be large, or even grow with dimension, and thus it is important to characterize this dependence.