Search Results for author: Hongzhou Lin

Found 14 papers, 4 papers with code

Complexity of Finding Stationary Points of Nonconvex Nonsmooth Functions

no code implementations ICML 2020 Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Suvrit Sra, Ali Jadbabaie

Therefore, we introduce the notion of $(\delta, \epsilon)$-stationarity, a generalization that allows a point to be within distance $\delta$ of an $\epsilon$-stationary point and reduces to $\epsilon$-stationarity for smooth functions.
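For concreteness, the notion can be written as follows (a sketch consistent with the sentence above, built on the Goldstein $\delta$-subdifferential; the paper's exact formulation may differ in details):

    % A point x is (\delta, \epsilon)-stationary if some convex combination of
    % generalized gradients taken within a \delta-ball around x has norm at most \epsilon:
    \partial_\delta f(x) = \mathrm{conv}\Bigl(\,\bigcup_{\|y-x\|\le\delta} \partial f(y)\Bigr),
    \qquad \min_{g \in \partial_\delta f(x)} \|g\| \le \epsilon .
    % As \delta \to 0 for smooth f this recovers \|\nabla f(x)\| \le \epsilon,
    % i.e. ordinary \epsilon-stationarity.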

Unmemorization in Large Language Models via Self-Distillation and Deliberate Imagination

1 code implementation 15 Feb 2024 Yijiang River Dong, Hongzhou Lin, Mikhail Belkin, Ramon Huerta, Ivan Vulić

Our results demonstrate the usefulness of this approach across different models and model sizes, including with parameter-efficient fine-tuning, offering a novel pathway for addressing the challenges posed by private and sensitive data in LLM applications.

Natural Language Understanding

Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning

no code implementations NeurIPS 2021 Ligeng Zhu, Hongzhou Lin, Yao Lu, Yujun Lin, Song Han

Federated Learning is an emerging direction in distributed machine learning that enables jointly training a model without sharing the data.

Federated Learning
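To make the federated setting above concrete, here is a minimal single-process simulation of the standard federated averaging (FedAvg) baseline. This illustrates the general setup only, not the paper's delayed gradient averaging scheme, and all data and parameter names are made up:

    import numpy as np

    # Minimal single-process simulation of federated averaging (FedAvg):
    # each client trains locally on its own data; only model weights are shared.
    # Illustrates the baseline setting, not the paper's delayed-gradient method.

    rng = np.random.default_rng(0)
    d, n_clients, rounds, local_steps, lr = 5, 4, 20, 10, 0.1

    # Hypothetical per-client linear-regression data (never pooled centrally).
    client_data = [(rng.normal(size=(30, d)), rng.normal(size=30))
                   for _ in range(n_clients)]

    w = np.zeros(d)  # global model
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:
            w_local = w.copy()
            for _ in range(local_steps):  # local SGD on the client's private data
                grad = X.T @ (X @ w_local - y) / len(y)
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)  # server averages weights, not data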

Stochastic Optimization with Non-stationary Noise: The Power of Moment Estimation

no code implementations 1 Jan 2021 Jingzhao Zhang, Hongzhou Lin, Subhro Das, Suvrit Sra, Ali Jadbabaie

In particular, standard results on optimal convergence rates for stochastic optimization assume either there exists a uniform bound on the moments of the gradient noise, or that the noise decays as the algorithm progresses.

Stochastic Optimization

IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method

no code implementations NeurIPS 2020 Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gürbüzbalaban, Stefanie Jegelka, Hongzhou Lin

We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.

Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions

no code implementations 10 Feb 2020 Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Ali Jadbabaie, Suvrit Sra

In particular, we study the class of Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds.
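For reference, Hadamard semi-differentiability requires the directional derivative below to exist, with the limit taken jointly over the step size and the direction (a standard statement of the definition; the paper's precise formulation may differ slightly):

    % f is Hadamard semi-differentiable at x in direction v if the limit
    f'(x; v) = \lim_{\substack{t \downarrow 0 \\ v' \to v}} \frac{f(x + t v') - f(x)}{t}
    % exists; the joint limit over (t, v') is what makes the chain rule go through.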

On the Complexity of Minimizing Convex Finite Sums Without Using the Indices of the Individual Functions

no code implementations 9 Feb 2020 Yossi Arjevani, Amit Daniely, Stefanie Jegelka, Hongzhou Lin

Recent advances in randomized incremental methods for minimizing $L$-smooth, $\mu$-strongly convex finite sums have culminated in tight complexity bounds of $\tilde{O}((n+\sqrt{nL/\mu})\log(1/\epsilon))$ for $\mu>0$ and $O(n+\sqrt{nL/\epsilon})$ for $\mu=0$, where $n$ denotes the number of individual functions.

Perceptual Regularization: Visualizing and Learning Generalizable Representations

no code implementations 25 Sep 2019 Hongzhou Lin, Joshua Robinson, Stefanie Jegelka

We propose a technique termed perceptual regularization that enables both visualization of the latent representation and control over the generality of the learned representation.

ResNet with one-neuron hidden layers is a Universal Approximator

1 code implementation NeurIPS 2018 Hongzhou Lin, Stefanie Jegelka

We demonstrate that a very deep ResNet, built from stacked modules with one neuron per hidden layer and ReLU activation functions, can uniformly approximate any Lebesgue-integrable function in $d$ dimensions, i.e. $\ell_1(\mathbb{R}^d)$.
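The architecture in question is simple enough to sketch in a few lines of Python. The weights below are random and purely illustrative; the theorem is about what such networks can represent, not about any particular weights:

    import numpy as np

    # Sketch of the architecture described above: a ResNet whose residual
    # blocks each contain a single hidden ReLU neuron.

    def one_neuron_resnet(x, blocks):
        # x: input in R^d; blocks: list of (u, b, v) with u, v in R^d, b scalar
        for u, b, v in blocks:
            h = max(0.0, float(u @ x + b))  # the single hidden ReLU neuron
            x = x + v * h                   # residual (identity skip) connection
        return x

    rng = np.random.default_rng(0)
    d, depth = 3, 8
    blocks = [(rng.normal(size=d), rng.normal(), rng.normal(size=d))
              for _ in range(depth)]
    print(one_neuron_resnet(rng.normal(size=d), blocks))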

Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice

1 code implementation 15 Dec 2017 Hongzhou Lin, Julien Mairal, Zaid Harchaoui

One of the keys to achieving acceleration in theory and in practice is to solve these sub-problems with appropriate accuracy, by using the right stopping criterion and the right warm-start strategy.
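A heavily simplified sketch of the Catalyst outer loop, assuming a generic gradient oracle and plain gradient descent as the inner solver; the paper's actual stopping criteria, accuracy schedule, and parameter choices are more careful than the fixed inner iteration count used here:

    import numpy as np

    # Catalyst outer loop (simplified): repeatedly minimize a proximal-
    # regularized subproblem, warm-started at the previous iterate, then
    # apply a Nesterov-style extrapolation step.

    def catalyst(f_grad, x0, kappa=1.0, outer_iters=50,
                 inner_iters=100, inner_lr=0.05):
        x_prev, y = x0.copy(), x0.copy()
        alpha = 1.0
        for _ in range(outer_iters):
            # Inner loop: approximately minimize f(x) + (kappa/2)||x - y||^2,
            # warm-started at the previous solution x_prev.
            x = x_prev.copy()
            for _ in range(inner_iters):
                x -= inner_lr * (f_grad(x) + kappa * (x - y))
            # Extrapolation sequence for the non-strongly-convex case (q = 0).
            alpha_next = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)
            beta = alpha * (1 - alpha) / (alpha**2 + alpha_next)
            y = x + beta * (x - x_prev)
            x_prev, alpha = x, alpha_next
        return x_prev

    # Example: accelerate gradient descent on a least-squares objective.
    A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
    sol = catalyst(lambda x: A.T @ (A @ x - b), np.zeros(2))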

Catalyst Acceleration for Gradient-Based Non-Convex Optimization

no code implementations 31 Mar 2017 Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaid Harchaoui

We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions.

An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration

1 code implementation 4 Oct 2016 Hongzhou Lin, Julien Mairal, Zaid Harchaoui

We propose an inexact variable-metric proximal point algorithm to accelerate gradient-based optimization algorithms.
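Schematically, each step approximately solves a proximal subproblem in a variable metric $H_k$ (a sketch of the generic update; the paper's quasi-Newton construction of $H_k$ and its inexactness criteria are what make the method practical):

    % Variable-metric proximal point step: H_k is a positive-definite matrix
    % (e.g. a quasi-Newton approximation), and the subproblem is solved inexactly.
    x_{k+1} \approx \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{1}{2}\,\|x - x_k\|_{H_k}^2,
    \qquad \|x\|_{H_k}^2 := x^\top H_k\, x .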
