Search Results for author: Xiaohan Chen

Found 33 papers, 22 papers with code

Rethinking the Capacity of Graph Neural Networks for Branching Strategy

no code implementations11 Feb 2024 Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin

Graph neural networks (GNNs) have been widely used to predict properties and heuristics of mixed-integer linear programs (MILPs) and hence accelerate MILP solvers.
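A minimal sketch, not the paper's code, of the variable-constraint bipartite encoding that MILP GNNs typically consume; the helper name and the single-feature choices below are illustrative assumptions:

```python
# Minimal sketch (assumed encoding, not the paper's implementation): an MILP
#   min c^T x  s.t.  A x <= b
# becomes a bipartite graph with one node per constraint, one node per variable,
# and one edge per nonzero coefficient of A.
import numpy as np

def milp_to_bipartite(A, b, c):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    cons_feats = np.asarray(b, dtype=float).reshape(m, 1)  # constraint feature: right-hand side
    var_feats = np.asarray(c, dtype=float).reshape(n, 1)   # variable feature: objective coefficient
    rows, cols = np.nonzero(A)                             # edges between constraints and variables
    edge_feats = A[rows, cols].reshape(-1, 1)
    return cons_feats, var_feats, np.stack([rows, cols]), edge_feats

# toy instance with 2 constraints and 3 variables
cons, var, edges, ew = milp_to_bipartite([[1, 2, 0], [0, 1, 3]], [4, 6], [1, -1, 2])
print(edges)  # column k links constraint edges[0, k] to variable edges[1, k]
```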

DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee

1 code implementation20 Oct 2023 Haoyu Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin

Mixed-integer linear programming (MILP) stands as a notable NP-hard problem pivotal to numerous crucial industrial applications.

Data Augmentation

Towards Constituting Mathematical Structures for Learning to Optimize

1 code implementation29 May 2023 Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai

Learning to Optimize (L2O), a technique that uses machine learning to learn an optimization algorithm automatically from data, has attracted increasing attention in recent years.

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

1 code implementation ICLR 2022 Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy

In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks.

Adversarial Robustness, Out-of-Distribution Detection
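A minimal sketch, under the assumption of uniform layer-wise sparsity, of what random pruning at initialization amounts to; the helper below is illustrative, not the released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_prune_at_init(layer_shapes, sparsity=0.9):
    """Keep a uniformly random (1 - sparsity) fraction of weights in every layer."""
    masks = []
    for shape in layer_shapes:
        n = int(np.prod(shape))
        keep = int(round((1.0 - sparsity) * n))
        mask = np.zeros(n, dtype=bool)
        mask[rng.choice(n, size=keep, replace=False)] = True
        masks.append(mask.reshape(shape))
    return masks

masks = random_prune_at_init([(128, 64), (64, 10)], sparsity=0.9)
print([round(float(m.mean()), 3) for m in masks])  # roughly 10% of weights survive per layer
```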

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

1 code implementation18 Dec 2021 Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen

Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.

Federated Learning

Hyperparameter Tuning is All You Need for LISTA

1 code implementation NeurIPS 2021 Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin

Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network.

Rolling Shutter Correction
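A minimal sketch of the unrolled iteration behind LISTA-style networks, with the weight fixed to the dictionary and only a per-layer step size and threshold left to tune, in the spirit of the paper's title; the dimensions and parameter values are illustrative:

```python
import numpy as np

def soft(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def unrolled_ista(y, D, steps, thresholds):
    """K unrolled iterations; `steps` and `thresholds` are the tuned hyperparameters."""
    x = np.zeros(D.shape[1])
    for gamma, theta in zip(steps, thresholds):
        x = soft(x - gamma * D.T @ (D @ x - y), theta)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = rng.standard_normal(8)
y = D @ x_true
step = 1.0 / np.linalg.norm(D, 2) ** 2                    # conservative, provably safe step size
x_hat = unrolled_ista(y, D, steps=[step] * 16, thresholds=[0.01] * 16)
print(float(np.linalg.norm(x_hat - x_true)))
```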

Lottery Image Prior

no code implementations29 Sep 2021 Qiming Wu, Xiaohan Chen, Yifan Jiang, Pan Zhou, Zhangyang Wang

Drawing inspiration from the recently flourishing research on the lottery ticket hypothesis (LTH), we conjecture and study a novel “lottery image prior” (LIP), stated as: given an (untrained or trained) DNN-based image prior, it has a sparse subnetwork that can be trained in isolation to match the original DNN’s performance when applied as a prior to various image inverse problems.

Compressive Sensing, Image Reconstruction +1
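For context, a minimal sketch of iterative magnitude pruning (IMP), the usual procedure for finding such sparse subnetworks; `train` is a hypothetical stand-in for the user's training loop, not part of the paper's code:

```python
import numpy as np

def iterative_magnitude_prune(init_weights, train, rounds=3, prune_frac=0.2):
    """Alternate training and magnitude pruning, rewinding to the initialization each round."""
    masks = {k: np.ones_like(v, dtype=bool) for k, v in init_weights.items()}
    for _ in range(rounds):
        trained = train({k: v * masks[k] for k, v in init_weights.items()}, masks)
        for k, w in trained.items():
            alive = np.abs(w[masks[k]])
            cutoff = np.quantile(alive, prune_frac)   # remove the smallest surviving weights
            masks[k] &= np.abs(w) > cutoff
    return masks  # the surviving subnetwork is then retrained from the original initialization
```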

Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently

no code implementations ICLR 2022 Xiaohan Chen, Jason Zhang, Zhangyang Wang

In this work, we define an extended class of subnetworks in randomly initialized NNs called disguised subnetworks, which are not only "hidden" in the random networks but also "disguised" -- hence can only be "unmasked" with certain transformations on weights.

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

2 code implementations NeurIPS 2021 Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang

Based on our analysis, we summarize a guideline for parameter settings with regard to specific architecture characteristics, which we hope will catalyze research progress on the lottery ticket hypothesis.

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration

2 code implementations NeurIPS 2021 Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu

Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has drawn considerable attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization).

Network Pruning, Sparse Learning
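A minimal sketch of a single prune-and-regrow update of the kind used to boost pruning plasticity during sparse training; the regrowth criterion is simplified to gradient magnitude and the function is illustrative, not the released implementation:

```python
import numpy as np

def prune_and_regrow(weights, grads, mask, n_update):
    """Drop the n_update weakest active weights, regrow n_update inactive ones with the largest gradients."""
    magnitude = np.where(mask, np.abs(weights), np.inf)   # only active weights are candidates to drop
    drop = np.argsort(magnitude, axis=None)[:n_update]
    mask.flat[drop] = False
    grad_mag = np.where(mask, -np.inf, np.abs(grads))     # only inactive positions are candidates to grow
    grow = np.argsort(grad_mag, axis=None)[-n_update:]
    mask.flat[grow] = True
    weights.flat[grow] = 0.0                              # regrown connections restart from zero
    return weights * mask, mask
```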

The Elastic Lottery Ticket Hypothesis

1 code implementation NeurIPS 2021 Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang

Based on these results, we articulate the Elastic Lottery Ticket Hypothesis (E-LTH): by mindfully replicating (or dropping) and re-ordering layers of one network, its winning ticket can be stretched (or squeezed) into a subnetwork of another deeper (or shallower) network from the same family, whose performance is nearly as competitive as the latter's winning ticket directly found by IMP.
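A minimal sketch, assuming layer-wise masks stored as a list, of how a ticket found on a shallower network might be stretched to a deeper one by replicating layer masks; the replication pattern here is illustrative rather than the paper's exact block-copy rule (squeezing would correspond to dropping masks instead):

```python
def stretch_ticket(layer_masks, target_depth):
    """Repeat the source layer masks in order until the deeper network's depth is covered."""
    stretched = []
    while len(stretched) < target_depth:
        for m in layer_masks:
            if len(stretched) == target_depth:
                break
            stretched.append(m.copy())
    return stretched
```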

Learning to Optimize: A Primer and A Benchmark

1 code implementation23 Mar 2021 Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin

It automates the design of an optimization method based on its performance on a set of training problems.

Benchmarking

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training

1 code implementation4 Jan 2021 Xiaohan Chen, Yang Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Yonggan Fu, Yingyan Lin, Zhangyang Wang

Results show that: 1) applied to inference, SD achieves up to 2.44x energy efficiency as evaluated via real hardware implementations; 2) applied to training, SD leads to 10.56x and 4.48x reductions in storage and training energy, respectively, with negligible accuracy loss compared to state-of-the-art training baselines.

Learning A Minimax Optimizer: A Pilot Study

no code implementations ICLR 2021 Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang

We first present Twin L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables, respectively.

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

1 code implementation ACL 2021 Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu

Heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks.

Model Compression

MATE: Plugging in Model Awareness to Task Embedding for Meta Learning

1 code implementation NeurIPS 2020 Xiaohan Chen, Zhangyang Wang, Siyu Tang, Krikamol Muandet

Meta-learning improves generalization of machine learning models when faced with previously unseen tasks by leveraging experiences from different, yet related prior tasks.

Feature Selection, Few-Shot Learning

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation

no code implementations7 May 2020 Yang Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Yonggan Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin

We present SmartExchange, an algorithm-hardware co-design framework to trade higher-cost memory storage/access for lower-cost computation, for energy-efficient inference of deep neural networks (DNNs).

Model Compression, Quantization

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks

1 code implementation ICLR 2020 Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy.
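A minimal sketch of the mask-distance criterion that signals an early-bird ticket has emerged; masks are treated as flat boolean arrays, and the threshold and window values are illustrative:

```python
import numpy as np

def mask_distance(m1, m2):
    """Normalized Hamming distance between two pruning masks."""
    return float(np.mean(m1 != m2))

def found_eb_ticket(mask_history, tol=0.02, window=3):
    """Declare an EB ticket once masks from consecutive recent epochs stop changing much."""
    if len(mask_history) <= window:
        return False
    recent = mask_history[-(window + 1):]
    return all(mask_distance(a, b) < tol for a, b in zip(recent, recent[1:]))
```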

Safeguarded Learned Convex Optimization

no code implementations4 Mar 2020 Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin

Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of training data.
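A minimal sketch of the safeguarding idea: accept the learned optimizer's step only when it passes a descent check against a provably convergent fallback; the specific check below is a simplification, not the paper's exact criterion:

```python
import numpy as np

def safeguarded_step(x, f, grad_f, learned_update, step=0.1, margin=1e-4):
    x_learned = learned_update(x)
    x_fallback = x - step * grad_f(x)              # classical, provably convergent step
    if f(x_learned) <= f(x_fallback) - margin:     # keep the learned step only if clearly better
        return x_learned
    return x_fallback

# toy quadratic with a deliberately bad "learned" update
f = lambda x: 0.5 * float(np.dot(x, x))
grad_f = lambda x: x
bad_update = lambda x: x + 1.0                     # would diverge without the safeguard
x = np.ones(3)
for _ in range(20):
    x = safeguarded_step(x, f, grad_f, bad_update)
print(f(x))  # keeps decreasing despite the unreliable learned update
```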

Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery

no code implementations3 Mar 2020 Zepeng Huo, Arash Pakbin, Xiaohan Chen, Nathan Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi

Activity recognition in wearable computing faces two key challenges: i) activity characteristics may be context-dependent and change under different contexts or situations; ii) unknown contexts and activities may occur from time to time, requiring flexibility and adaptability of the algorithm.

Clustering, Human Activity Recognition +1

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings

no code implementations NeurIPS 2019 Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang

Extensive simulations and ablation studies, with real energy measurements from an FPGA board, confirm the superiority of our proposed strategies and demonstrate remarkable energy savings for training.

Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks

2 code implementations26 Sep 2019 Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

In this paper, we discover for the first time that winning tickets can be identified at a very early training stage, which we term early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates.

Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence

no code implementations25 Sep 2019 Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin

Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data.

Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

1 code implementation14 May 2019 Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin

Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms.

Denoising
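A minimal sketch of the PnP-ADMM iteration, with the denoiser replaced by a simple box-blur stand-in (in practice it would be BM3D or a trained CNN denoiser); the function names are illustrative:

```python
import numpy as np

def box_blur(v, sigma):                            # stand-in for a real denoiser (sigma unused here)
    kernel = np.ones(3) / 3.0
    return np.convolve(v, kernel, mode="same")

def pnp_admm(prox_f, denoiser, x0, sigma=0.1, iters=50):
    """min_x f(x) + g(x): prox_f handles data fidelity, the denoiser replaces prox of the prior."""
    x, z, u = x0.copy(), x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        x = prox_f(z - u)                          # data-fidelity proximal step
        z = denoiser(x + u, sigma)                 # denoising step in place of prox_g
        u = u + x - z                              # dual update
    return x
```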

ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA

no code implementations ICLR 2019 Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin

In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning.
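A minimal sketch of computing such a data-free weight: each column of W solves a small constrained least-squares problem with a closed form, rather than the projected-gradient routine used in the paper; dimensions are illustrative:

```python
import numpy as np

def analytic_weight(D, eps=1e-8):
    """Column i minimizes ||D^T w||^2 subject to d_i^T w = 1 (so diag(W^T D) = 1)."""
    G_inv = np.linalg.pinv(D @ D.T + eps * np.eye(D.shape[0]))
    W = np.empty_like(D)
    for i in range(D.shape[1]):
        v = G_inv @ D[:, i]
        W[:, i] = v / float(D[:, i] @ v)
    return W

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64)) / np.sqrt(32)
W = analytic_weight(D)
print(np.allclose(np.diag(W.T @ D), 1.0))  # constraint satisfied; off-diagonal entries stay small
```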

Can We Gain More from Orthogonality Regularizations in Training Deep Networks?

1 code implementation NeurIPS 2018 Nitin Bansal, Xiaohan Chen, Zhangyang Wang

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
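A minimal sketch of the simplest variant studied in this line of work, a soft orthogonality penalty added to the training loss; the coefficient and reshaping convention are illustrative assumptions:

```python
import numpy as np

def soft_orthogonality_penalty(W):
    """||M M^T - I||_F^2, where the rows of M are the flattened filters of a weight tensor W."""
    M = np.asarray(W).reshape(W.shape[0], -1)
    gram = M @ M.T
    return float(np.sum((gram - np.eye(gram.shape[0])) ** 2))

# usage (conceptually): total_loss = task_loss + 1e-4 * soft_orthogonality_penalty(conv_weight)
```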

Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

1 code implementation NeurIPS 2018 Nitin Bansal, Xiaohan Chen, Zhangyang Wang

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
