Search Results for author: Yucheng Lu

Found 14 papers, 7 papers with code

Source Matters: Source Dataset Impact on Model Robustness in Medical Imaging

1 code implementation • 7 Mar 2024 • Dovile Juodelyte, Yucheng Lu, Amelia Jiménez-Sánchez, Sabrina Bottazzi, Enzo Ferrante, Veronika Cheplygina

However, the domain shift from natural to medical images has prompted alternatives such as RadImageNet, often demonstrating comparable classification performance.

Classification • Transfer Learning

Coordinating Distributed Example Orders for Provably Accelerated Training

1 code implementation • NeurIPS 2023 • A. Feder Cooper, Wentao Guo, Khiem Pham, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, Christopher De Sa

Recent research on online Gradient Balancing (GraB) has revealed that there exist permutation-based example orderings for SGD that are guaranteed to outperform random reshuffling (RR).

STEP: Learning N:M Structured Sparsity Masks from Scratch with Precondition

no code implementations • 2 Feb 2023 • Yucheng Lu, Shivani Agrawal, Suvinay Subramanian, Oleg Rybakov, Christopher De Sa, Amir Yazdanbakhsh

Recent innovations in hardware (e.g., Nvidia A100) have motivated learning N:M structured sparsity masks from scratch for fast model inference.

Machine Translation
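
The STEP excerpt above refers to N:M structured sparsity, in which at most N weights in every contiguous group of M remain non-zero (e.g. 2:4 on the A100's sparse tensor cores). The minimal NumPy sketch below builds such a mask by plain magnitude pruning; it only illustrates the N:M pattern, not the STEP method, which learns the mask during training.

```python
import numpy as np

def nm_sparsity_mask(weights, n=2, m=4):
    """Keep the n largest-magnitude entries in every contiguous group of m.

    Plain magnitude pruning, shown only to illustrate the N:M pattern;
    STEP itself learns the mask from scratch during training."""
    w = np.asarray(weights, dtype=float)
    assert w.size % m == 0, "weight count must be divisible by m"
    groups = np.abs(w).reshape(-1, m)            # one row per group of m
    top = np.argsort(groups, axis=1)[:, -n:]     # indices of the n largest per group
    mask = np.zeros_like(groups)
    np.put_along_axis(mask, top, 1.0, axis=1)
    return mask.reshape(w.shape)

w = np.array([0.3, -1.2, 0.05, 0.8, -0.1, 0.02, 2.0, -0.7])
print(nm_sparsity_mask(w))  # exactly 2 non-zeros kept in each group of 4
```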

RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided Self-Exemplars

1 code implementation • 24 Aug 2022 • Jun-Sang Yoo, Dong-Wook Kim, Yucheng Lu, Seung-Won Jung

To advance ZSSR, we obtain reference image patches with rich textures and high-frequency details, which are themselves extracted only from the input image using cross-scale matching.

Image Super-Resolution
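
The RZSR excerpt mentions cross-scale matching, i.e. finding reference patches for a query patch within the input image itself across scales. The sketch below is a hypothetical, brute-force illustration of that idea (naive downscaling, L2 patch distance); it ignores the depth guidance and every other detail of the paper.

```python
import numpy as np

def best_cross_scale_match(img, patch_size=8, scale=2):
    """Take one query patch from a downscaled copy of img and return the
    top-left corner of the most similar same-size patch in the original
    image (L2 distance). Brute-force illustration of cross-scale
    self-exemplars only; no depth guidance, no refinement."""
    small = img[::scale, ::scale]                 # naive downscaling
    query = small[:patch_size, :patch_size]       # a single query patch
    h, w = img.shape
    best_dist, best_pos = np.inf, (0, 0)
    for y in range(h - patch_size + 1):
        for x in range(w - patch_size + 1):
            cand = img[y:y + patch_size, x:x + patch_size]
            dist = np.sum((cand - query) ** 2)
            if dist < best_dist:
                best_dist, best_pos = dist, (y, x)
    return best_pos

img = np.random.default_rng(0).random((32, 32))
print(best_cross_scale_match(img))
```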

GraB: Finding Provably Better Data Permutations than Random Reshuffling

2 code implementations • 22 May 2022 • Yucheng Lu, Wentao Guo, Christopher De Sa

To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension.
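
The balancing step at the heart of GraB comes from herding/discrepancy minimization: assign each centered gradient a +1 or -1 sign so that the running signed sum stays small, then use the signs to build the next epoch's ordering. The sketch below illustrates only that sign-assignment idea on toy vectors; the reordering rule shown is a simplified stand-in, not the paper's exact algorithm.

```python
import numpy as np

def balance_signs(vectors):
    """Greedy herding-style balancing: pick +1/-1 for each vector so the
    running signed sum stays small (illustrative, not GraB verbatim)."""
    running = np.zeros_like(vectors[0])
    signs = []
    for v in vectors:
        s = 1 if np.linalg.norm(running + v) <= np.linalg.norm(running - v) else -1
        signs.append(s)
        running = running + s * v
    return signs

def toy_reorder(indices, signs):
    """Simplified stand-in for the ordering step: +1 examples to the front,
    -1 examples (reversed) to the back."""
    front = [i for i, s in zip(indices, signs) if s > 0]
    back = [i for i, s in zip(indices, signs) if s < 0]
    return front + back[::-1]

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 3))
centered = grads - grads.mean(axis=0)   # balance centered (mean-subtracted) gradients
signs = balance_signs(centered)
print(toy_reorder(list(range(8)), signs))
```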

Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam

1 code implementation • 12 Feb 2022 • Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He

1-bit gradient compression and local steps are two representative techniques that enable drastic communication reduction in distributed SGD.

Open-Ended Question Answering
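
The 0/1 Adam excerpt cites 1-bit gradient compression as one of the two techniques it builds on. A common way such compression is made stable is sign quantization with error feedback, sketched below as a generic illustration; it is not the 0/1 Adam algorithm itself.

```python
import numpy as np

class OneBitCompressor:
    """Sign compression with error feedback: transmit sign(g + residual) plus
    one scale per tensor, and carry the quantization error to the next step.
    A generic illustration, not the 0/1 Adam algorithm."""

    def __init__(self, dim):
        self.residual = np.zeros(dim)

    def compress(self, grad):
        corrected = grad + self.residual
        scale = np.mean(np.abs(corrected))     # one float sent per tensor
        signs = np.sign(corrected)             # one bit sent per coordinate
        decoded = scale * signs                # what the receiver reconstructs
        self.residual = corrected - decoded    # error fed back next round
        return scale, signs

comp = OneBitCompressor(dim=4)
g = np.array([0.5, -0.2, 0.1, -0.9])
scale, signs = comp.compress(g)
print(scale * signs, comp.residual)
```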

A General Analysis of Example-Selection for Stochastic Gradient Descent

no code implementations • ICLR 2022 • Yucheng Lu, Si Yi Meng, Christopher De Sa

In this paper, we develop a broad condition on the sequence of examples used by SGD that is sufficient to prove tight convergence rates in both strongly convex and non-convex settings.

Data Augmentation

Progressive Joint Low-light Enhancement and Noise Removal for Raw Images

1 code implementation • 28 Jun 2021 • Yucheng Lu, Seung-Won Jung

Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture, resulting in a low signal-to-noise ratio.

Denoising

Variance Reduced Training with Stratified Sampling for Forecasting Models

no code implementations • 2 Mar 2021 • Yucheng Lu, Youngsuk Park, Lifan Chen, Yuyang Wang, Christopher De Sa, Dean Foster

In large-scale time series forecasting, one often encounters the situation where the temporal patterns of time series, while drifting over time, differ from one another in the same dataset.

Time Series • Time Series Forecasting
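
The excerpt above motivates stratified sampling: when series in one dataset follow distinct temporal patterns, drawing mini-batches per stratum rather than uniformly over all series can reduce gradient variance. The sketch below shows generic proportional per-stratum sampling; the grouping rule and function names are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def stratified_batch(series_ids, strata, batch_size, rng):
    """Draw a mini-batch with each stratum represented roughly in proportion
    to its size, instead of sampling uniformly over all series.
    Hypothetical illustration; not the paper's sampling scheme."""
    series_ids = np.asarray(series_ids)
    strata = np.asarray(strata)
    batch = []
    for s in np.unique(strata):
        members = series_ids[strata == s]
        k = max(1, round(batch_size * len(members) / len(series_ids)))
        batch.extend(rng.choice(members, size=k, replace=False))
    return batch

rng = np.random.default_rng(0)
ids = np.arange(100)
strata = np.where(ids < 70, 0, 1)   # e.g. 70 "stable" series and 30 "drifting" ones
print(stratified_batch(ids, strata, batch_size=10, rng=rng))
```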

Hyperparameter Optimization Is Deceiving Us, and How to Stop It

1 code implementation • NeurIPS 2021 • A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, Christopher De Sa

Recent empirical work shows that inconsistent results based on choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research.

Hyperparameter Optimization

Optimal Complexity in Decentralized Training

no code implementations • 15 Jun 2020 • Yucheng Lu, Christopher De Sa

Decentralization is a promising method of scaling up parallel machine learning systems.

Image Classification

Moniqua: Modulo Quantized Communication in Decentralized SGD

no code implementations • ICML 2020 • Yucheng Lu, Christopher De Sa

Running Stochastic Gradient Descent (SGD) in a decentralized fashion has shown promising results.

Quantization
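
Moniqua's title refers to modulo quantized communication: because neighboring workers' parameters stay close in decentralized SGD, it can suffice to transmit values modulo a small range and let the receiver resolve the ambiguity with its own nearby estimate. The sketch below illustrates only that generic modulo-residue trick under the assumption that the estimate lies within half the box width of the true value; it is not the Moniqua protocol.

```python
import numpy as np

def modulo_encode(x, box):
    """Transmit only the residue of x inside a box of width `box`
    (a bounded value that needs fewer bits than x itself)."""
    return np.mod(x, box)

def modulo_decode(residue, estimate, box):
    """Recover x from its residue using a local estimate assumed to lie
    within box/2 of the true value (e.g. a neighbor's nearby parameters)."""
    k = np.round((estimate - residue) / box)
    return residue + k * box

x = np.array([3.07, -1.42, 0.55])           # true parameters on the sender
est = x + np.array([0.04, -0.03, 0.02])     # receiver's close local estimate
box = 0.5
print(modulo_decode(modulo_encode(x, box), est, box))   # recovers x from residues alone
```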
