Bilevel Optimization
96 papers with code • 0 benchmarks • 0 datasets
Bilevel Optimization is a branch of optimization in which one optimization problem is nested within the constraints of another. The outer optimization task is usually referred to as the upper-level task, and the nested inner optimization task is referred to as the lower-level task. The lower-level problem appears as a constraint, such that only an optimal solution to the lower-level optimization problem is a feasible candidate for the upper-level optimization problem.
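The nested structure described above can be made concrete with a toy problem (the specific objectives below are illustrative, not from the source): the lower level has a closed-form solution, so the upper level reduces to an ordinary minimization that we can solve by gradient descent.

```python
# Toy bilevel problem (illustrative example, not from the source):
#   upper level:  min_x  F(x, y*(x)) = x^2 + (y*(x) - 1)^2
#   lower level:  y*(x) = argmin_y  g(x, y) = (y - x)^2
# The lower-level solution is y*(x) = x, so the reduced upper-level
# objective is F(x) = x^2 + (x - 1)^2, minimized at x = 0.5.

def lower_level_solution(x):
    # Solve the inner problem in closed form: argmin_y (y - x)^2 = x.
    # Only this optimal inner solution is feasible for the upper level.
    return x

def upper_objective(x):
    y = lower_level_solution(x)
    return x**2 + (y - 1.0)**2

# Gradient descent on the reduced upper-level objective,
# using the implicit derivative dy*/dx = 1.
x = 3.0
for _ in range(200):
    y = lower_level_solution(x)
    grad = 2 * x + 2 * (y - 1.0)   # d/dx [x^2 + (x - 1)^2]
    x -= 0.1 * grad

print(round(x, 4))  # converges to 0.5
```

In realistic settings the lower-level problem has no closed form, which is why methods in the papers below resort to implicit differentiation, unrolled inner optimization, or evolutionary search.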
Source: Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization
Benchmarks
These leaderboards are used to track progress in Bilevel Optimization
Latest papers
Embarrassingly Simple Dataset Distillation
Re-examining the foundational back-propagation through time method, we study the pronounced variance in the gradients, computational burden, and long-term dependencies.
Self-Supervised Dataset Distillation for Transfer Learning
To achieve this, we also introduce the MSE between representations of the inner model and the self-supervised target model on the original full dataset for outer optimization.
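The snippet above describes an outer objective that matches representations between an inner model and a frozen self-supervised target model via MSE. A hypothetical sketch of what an outer loss of that shape might look like (names, shapes, and the NumPy formulation are assumptions, not taken from the paper):

```python
import numpy as np

def representation_mse(inner_feats, target_feats):
    # inner_feats, target_feats: (num_examples, feature_dim) arrays of
    # representations computed on the original full dataset.
    # Hypothetical outer objective: mean squared error between the
    # inner model's features and the frozen target model's features.
    return float(np.mean((inner_feats - target_feats) ** 2))

# Illustrative usage with random stand-in representations.
rng = np.random.default_rng(0)
z_inner = rng.normal(size=(8, 4))
z_target = rng.normal(size=(8, 4))
loss = representation_mse(z_inner, z_target)
```

In the bilevel reading, the inner level trains a model on the distilled data, and this loss is evaluated at the outer level to update the distilled dataset.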
Bregman Graph Neural Network
Much recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem under a smoothness assumption.
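The smoothness-based optimization view mentioned above is commonly written as minimizing an energy of the form E(X) = ||X - F||² + λ·tr(XᵀLX), where F holds the input node features and L is the graph Laplacian; a GNN layer then acts like a descent step on E. A minimal sketch of that idea (the specific graph and step size are illustrative, not from the paper):

```python
import numpy as np

# Smoothness energy on a 3-node path graph (illustrative):
#   E(X) = ||X - F||_F^2 + lam * tr(X^T L X)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency of a path graph
D = np.diag(A.sum(axis=1))
L = D - A                                 # combinatorial graph Laplacian

F = np.array([[1.0], [0.0], [1.0]])       # input node features
lam, step = 1.0, 0.1

X = F.copy()
for _ in range(100):
    grad = 2 * (X - F) + 2 * lam * (L @ X)  # gradient of E w.r.t. X
    X -= step * grad

# X converges to (I + lam*L)^(-1) F: a smoothed version of F in which
# neighboring nodes' features move closer together.
```

With λ = 1 on this graph the fixed point is X ≈ [0.75, 0.5, 0.75], illustrating how the smoothness term pulls adjacent features together.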
RemovalNet: DNN Fingerprint Removal Attacks
After our DNN fingerprint removal attack, (1) the model distance between the target and surrogate models is over 100 times higher than that of the baseline attacks, and (2) the RemovalNet is efficient.
HypBO: Accelerating Black-Box Scientific Experiments Using Experts' Hypotheses
Here, we exploit expert human knowledge in the form of hypotheses to direct Bayesian searches more quickly to promising regions of chemical space.
Bilevel Generative Learning for Low-Light Vision
In this study, we propose a generic low-light vision solution by introducing a generative block to convert data from the RAW to the RGB domain.
BiERL: A Meta Evolutionary Reinforcement Learning Framework via Bilevel Optimization
Evolutionary reinforcement learning (ERL) algorithms have recently attracted attention for tackling complex reinforcement learning (RL) problems due to their high parallelism, but they are prone to insufficient exploration or model collapse without careful tuning of hyperparameters (aka meta-parameters).
Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images
Experimental results show that our model can learn color and affine transformations that are more helpful for training an image classifier than predefined DA transformations, which are also more expensive since they must be selected before training by grid search on a validation set.
Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective
The proposed method demonstrates flexibility across diverse dataset scales and exhibits multiple advantages in terms of arbitrary resolutions of synthesized images, low training cost and memory consumption with high-resolution synthesis, and the ability to scale up to arbitrary evaluation network architectures.
From Hypergraph Energy Functions to Hypergraph Neural Networks
Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest.