Search Results for author: Shangzhi Zeng

Found 13 papers, 5 papers with code

Constrained Bi-Level Optimization: Proximal Lagrangian Value Function Approach and Hessian-free Algorithm

no code implementations29 Jan 2024 Wei Yao, Chengming Yu, Shangzhi Zeng, Jin Zhang

To address this challenge, we begin by devising a smooth proximal Lagrangian value function to handle the constrained lower-level problem.

Moreau Envelope Based Difference-of-weakly-Convex Reformulation and Algorithm for Bilevel Programs

no code implementations29 Jun 2023 Lucy L. Gao, Jane J. Ye, Haian Yin, Shangzhi Zeng, Jin Zhang

In a recent study by Ye et al. (2023), a value-function-based difference-of-convex algorithm was introduced to address bilevel programs.
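For context, the value-function reformulation underlying this line of work replaces the lower-level problem with an inequality constraint. The following is a standard sketch of the construction, not the paper's exact formulation:

```latex
\min_{x,\,y} \; F(x, y)
\quad \text{s.t.} \quad f(x, y) - v(x) \le 0,
\qquad v(x) := \min_{y'} f(x, y')
```

When $f$ is jointly convex, the value function $v$ is convex as well, so the constraint function $f(x,y) - v(x)$ is a difference of two convex functions, which is the structure a difference-of-convex algorithm exploits.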

Hierarchical Optimization-Derived Learning

no code implementations11 Feb 2023 Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang

In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks.

Averaged Method of Multipliers for Bi-Level Optimization without Lower-Level Strong Convexity

1 code implementation7 Feb 2023 Risheng Liu, Yaohua Liu, Wei Yao, Shangzhi Zeng, Jin Zhang

Gradient methods have become mainstream techniques for Bi-Level Optimization (BLO) in learning fields.

Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-training

no code implementations16 Jun 2022 Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang

Recently, Optimization-Derived Learning (ODL), which designs learning models from the perspective of optimization, has attracted attention from the learning and vision areas.

Image Deconvolution

Value Function Based Difference-of-Convex Algorithm for Bilevel Hyperparameter Selection Problems

1 code implementation13 Jun 2022 Lucy Gao, Jane J. Ye, Haian Yin, Shangzhi Zeng, Jin Zhang

Gradient-based optimization methods for hyperparameter tuning guarantee theoretical convergence to stationary solutions when, for fixed upper-level variable values, the lower level of the bilevel program is strongly convex (LLSC) and smooth (LLS).
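To illustrate the LLSC/LLS setting these guarantees rely on, here is a minimal sketch of implicit-differentiation hypergradients on a toy bilevel program with a closed-form lower-level solution. The specific objectives are invented for illustration and are not from any of the papers above:

```python
# Toy bilevel program, assuming the LLSC/LLS setting:
# Upper level:  F(x, y) = (y - 0.5)^2, with hyperparameter x > 0.
# Lower level:  f(x, y) = (y - 1)^2 + x * y^2, strongly convex and smooth in y,
#               with closed-form minimizer y*(x) = 1 / (1 + x).

def lower_solution(x):
    # Solve the lower-level problem exactly: y*(x) = argmin_y f(x, y).
    return 1.0 / (1.0 + x)

def hypergradient(x):
    y = lower_solution(x)
    # Implicit function theorem: dy*/dx = -f_yy^{-1} * f_yx
    #   f_yy = 2 + 2x,  f_yx = 2y  =>  dy*/dx = -y / (1 + x)
    dy_dx = -y / (1.0 + x)
    # Chain rule: dF/dx = F_y(x, y*(x)) * dy*/dx
    return 2.0 * (y - 0.5) * dy_dx

def objective(x):
    # Reduced upper-level objective F(x, y*(x)).
    y = lower_solution(x)
    return (y - 0.5) ** 2

# Sanity check: the implicit hypergradient matches a central finite difference.
x0, h = 1.5, 1e-6
fd = (objective(x0 + h) - objective(x0 - h)) / (2 * h)
assert abs(hypergradient(x0) - fd) < 1e-6
```

When the lower level loses strong convexity, `lower_solution` may be set-valued and the implicit-function step above breaks down, which is the regime the value-function-based methods in this listing are designed for.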

Towards Extremely Fast Bilevel Optimization with Self-governed Convergence Guarantees

no code implementations20 May 2022 Risheng Liu, Xuan Liu, Wei Yao, Shangzhi Zeng, Jin Zhang

Gradient methods have become mainstream techniques for Bi-Level Optimization (BLO) in learning and vision fields.

Bilevel Optimization

Value-Function-based Sequential Minimization for Bi-level Optimization

1 code implementation11 Oct 2021 Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang

We also extend BVFSM to address BLO with additional functional constraints.

Towards Gradient-based Bilevel Optimization with Non-convex Followers and Beyond

1 code implementation NeurIPS 2021 Risheng Liu, Yaohua Liu, Shangzhi Zeng, Jin Zhang

In particular, by introducing an auxiliary variable as initialization to guide the optimization dynamics and designing a pessimistic trajectory truncation operation, we construct a reliable approximate version of the original BLO in the absence of the LLC hypothesis.

Bilevel Optimization

A Value-Function-based Interior-point Method for Non-convex Bi-level Optimization

no code implementations15 Jun 2021 Risheng Liu, Xuan Liu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang

Bi-level optimization models are able to capture a wide range of complex learning tasks of practical interest.

A General Descent Aggregation Framework for Gradient-based Bi-level Optimization

1 code implementation16 Feb 2021 Risheng Liu, Pan Mu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang

In this work, we formulate BLOs from an optimistic bi-level viewpoint and establish a new gradient-based algorithmic framework, named Bi-level Descent Aggregation (BDA), to partially address the above issues.

Meta-Learning

A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Level Singleton

no code implementations ICML 2020 Risheng Liu, Pan Mu, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang

In recent years, a variety of gradient-based first-order methods have been developed to solve bi-level optimization problems for learning applications.

Meta-Learning

Task-Oriented Convex Bilevel Optimization with Latent Feasibility

no code implementations6 Jul 2019 Risheng Liu, Long Ma, Xiaoming Yuan, Shangzhi Zeng, Jin Zhang

This paper first proposes a convex bilevel optimization paradigm to formulate and optimize popular learning and vision problems in real-world scenarios.

Bilevel Optimization
