no code implementations • 28 Mar 2024 • Alireza Ganjdanesh, Shangqian Gao, Heng Huang
We address this challenge by designing a mechanism to model the complex changing dynamics of the reward function and provide a representation of it to the RL agent.
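As a rough illustration of this idea (with hypothetical names; not the paper's actual architecture), the sketch below encodes a window of recent rewards into a context vector that can be concatenated to the agent's observation:

```python
import torch
import torch.nn as nn

class RewardDynamicsEncoder(nn.Module):
    """Summarizes a window of recent rewards into a context vector that
    can be concatenated to the RL agent's observation."""

    def __init__(self, hidden_dim=32, context_dim=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, context_dim)

    def forward(self, reward_history):      # (batch, window, 1)
        _, h = self.rnn(reward_history)
        return self.head(h[-1])             # (batch, context_dim)

# obs_aug = torch.cat([state, encoder(recent_rewards)], dim=-1)
```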
1 code implementation • 21 Mar 2024 • Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu Zhang, Xiaoqian Wang, Heng Huang
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.
no code implementations • 22 Dec 2023 • Alireza Ganjdanesh, Shangqian Gao, Hirad Alipanah, Heng Huang
Thus, they neglect a critical characteristic of GANs: the local density structure over their learned manifold.
no code implementations • 2 Dec 2023 • Minchul Kim, Shangqian Gao, Yen-Chang Hsu, Yilin Shen, Hongxia Jin
In this paper, we introduce "Token Fusion" (ToFu), a method that amalgamates the benefits of both token pruning and token merging.
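For intuition, here is a minimal sketch of the merging half of the idea, averaging the most cosine-similar adjacent token pairs; it is a simplified stand-in, not the paper's actual ToFu operator:

```python
import torch
import torch.nn.functional as F

def merge_most_similar_tokens(x, num_merges):
    """Average the `num_merges` most cosine-similar adjacent token pairs.

    x: (num_tokens, dim) tensor. Overlapping pairs are not handled,
    for brevity; this is a toy merging step, not learned fusion.
    """
    sim = F.cosine_similarity(x[:-1], x[1:], dim=-1)   # (num_tokens - 1,)
    pairs = sim.topk(num_merges).indices                # left index of each pair
    keep = torch.ones(x.size(0), dtype=torch.bool)
    for i in pairs.tolist():
        x[i] = 0.5 * (x[i] + x[i + 1])                  # fuse pair into the left slot
        keep[i + 1] = False                             # drop the right token
    return x[keep]
```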
no code implementations • ICCV 2023 • Shangqian Gao, Zeyu Zhang, Yanfu Zhang, Feihu Huang, Heng Huang
To bridge this gap, we first learn a target sub-network during the model training process, and then we use this sub-network to guide the learning of model weights through partial regularization.
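A minimal sketch of what such partial regularization could look like, assuming a binary mask marking the target sub-network (illustrative only, not the paper's exact formulation):

```python
import torch

def partial_l2_penalty(weight, subnet_mask, strength=1e-4):
    """L2 penalty applied only to weights *outside* the target sub-network.

    weight and subnet_mask share a shape; the mask is 1 for kept weights,
    so pruned weights are pushed toward zero while kept weights train freely.
    """
    return strength * ((weight * (1.0 - subnet_mask)) ** 2).sum()

# total_loss = task_loss + sum(partial_l2_penalty(w, m) for w, m in zip(weights, masks))
```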
1 code implementation • 7 Sep 2022 • Alireza Ganjdanesh, Shangqian Gao, Heng Huang
To fill in this gap, we propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process, thereby utilizing information from both inputs and outputs of the model.
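One simple interpretation signal of this flavor is a gradient-times-activation saliency per channel; the sketch below is a generic stand-in, not the paper's actual interpretation machinery:

```python
import torch

def channel_saliency(activation, grad):
    """Per-channel |gradient x activation| saliency.

    activation, grad: (batch, channels, H, W) from a forward/backward pass.
    Channels with small saliency contribute little to the model's output
    and are candidates for pruning.
    """
    return (activation * grad).abs().mean(dim=(0, 2, 3))  # (channels,)

# keep = channel_saliency(act, act_grad).topk(num_keep).indices
```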
no code implementations • 26 Jul 2021 • Feihu Huang, Junyi Li, Shangqian Gao, Heng Huang
Specifically, we propose a bilevel optimization method based on Bregman distance (BiO-BreD) to solve deterministic bilevel problems, which achieves a lower computational complexity than the best known results.
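For reference, the Bregman distance induced by a differentiable, strictly convex function $h$ is

$$ D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle, $$

and bilevel problems of this kind follow the generic template

$$ \min_{x} \; f\big(x, y^{*}(x)\big) \quad \text{s.t.} \quad y^{*}(x) \in \arg\min_{y} g(x, y). $$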
1 code implementation • ICLR 2022 • Feihu Huang, Shangqian Gao, Heng Huang
In this paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques.
no code implementations • 21 Jun 2021 • Feihu Huang, Junyi Li, Shangqian Gao
To fill this gap, we propose a novel fast adaptive bilevel framework to solve stochastic bilevel optimization problems in which the outer problem is possibly nonconvex and the inner problem is strongly convex.
1 code implementation • CVPR 2021 • Shangqian Gao, Feihu Huang, Weidong Cai, Heng Huang
Specifically, we train a stand-alone neural network to predict sub-networks' performance and then maximize the output of the network as a proxy of accuracy to guide pruning.
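A minimal sketch of such a performance predictor (hypothetical names; the paper's actual network and training setup may differ):

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """MLP mapping a (relaxed) channel-keep mask to predicted accuracy."""

    def __init__(self, num_channels, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_channels, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, mask):          # mask: (batch, num_channels) in [0, 1]
        return self.net(mask)

# After fitting on (mask, measured accuracy) pairs, use the predictor as a
# differentiable accuracy proxy: relax the mask and ascend its output.
# mask = torch.sigmoid(logits); (-predictor(mask)).mean().backward()
```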
no code implementations • ICCV 2021 • Chao Li, Shangqian Gao, Cheng Deng, Wei Liu, Heng Huang
Specifically, given a target model, we first construct a substitute model to exploit cross-modal correlations within the Hamming space, and we then use it to create adversarial examples with only a limited number of queries to the target model.
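As a rough sketch of the substitute-model step, one FGSM-style update on a relaxed (tanh) code could look like the following; this is a generic illustration, not the paper's actual attack:

```python
import torch

def fgsm_on_substitute(substitute, x, target_code, eps=8 / 255):
    """One FGSM step against a substitute hashing model.

    Pushes the image's relaxed (tanh) code away from its correct binary
    code `target_code`, so retrieval in Hamming space degrades.
    """
    x = x.clone().detach().requires_grad_(True)
    code = torch.tanh(substitute(x))        # continuous surrogate for binary codes
    loss = -(code * target_code).sum()      # ascend => smaller inner product
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```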
no code implementations • ICCV 2021 • Yanfu Zhang, Shangqian Gao, Heng Huang
In this paper, we focus on the discrimination-aware compression of Convolutional Neural Networks (CNNs).
no code implementations • 1 Jan 2021 • Shangqian Gao, Feihu Huang, Heng Huang
In this paper, we propose a novel channel pruning method to solve the problem of compression and acceleration of Convolutional Neural Networks (CNNs).
no code implementations • 13 Oct 2020 • Feihu Huang, Shangqian Gao
At the same time, we present an effective Riemannian stochastic gradient descent ascent (RSGDA) algorithm for the stochastic minimax optimization, which has a sample complexity of $O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary solution.
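Schematically, such descent-ascent updates alternate a retraction-based Riemannian step in $x$ with a gradient ascent step in $y$ (a generic template, not the exact algorithm):

$$ x_{t+1} = \mathcal{R}_{x_t}\big(-\gamma\, \widetilde{\nabla}_x f(x_t, y_t)\big), \qquad y_{t+1} = y_t + \lambda\, \widetilde{\nabla}_y f(x_t, y_t), $$

where $\mathcal{R}$ is a retraction and $\widetilde{\nabla}$ denotes a stochastic gradient.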
no code implementations • 18 Aug 2020 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
Our Acc-MDA achieves a low gradient complexity of $\tilde{O}(\kappa_y^{4.5}\epsilon^{-3})$ without requiring large batches for finding an $\epsilon$-stationary point.
1 code implementation • ICML 2020 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
In particular, we present a non-adaptive version of the IS-MBPG method, i.e., IS-MBPG*, which also reaches the best-known sample complexity of $O(\epsilon^{-3})$ without any large batches.
no code implementations • CVPR 2020 • Shangqian Gao, Feihu Huang, Jian Pei, Heng Huang
In this paper, we aim to address the problem of compression and acceleration of Convolutional Neural Networks (CNNs).
1 code implementation • NeurIPS 2019 • Chao Li, Shangqian Gao, Cheng Deng, De Xie, Wei Liu
Extensive experiments on two cross-modal benchmark datasets show that the adversarial examples produced by our CMLA are effective at fooling a target deep cross-modal hashing network.
no code implementations • 30 Jul 2019 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving complex machine learning problems where gradients of the objective functions are unavailable or computationally prohibitive.
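A classic example is the two-point gradient estimator, which needs only function evaluations; a minimal sketch:

```python
import torch

def two_point_grad_estimate(f, x, mu=1e-3):
    """Two-point zeroth-order gradient estimator over the unit sphere:
    g = d / (2 * mu) * (f(x + mu * u) - f(x - mu * u)) * u.

    Only two function evaluations are needed; no gradients of `f`.
    """
    u = torch.randn_like(x)
    u = u / u.norm()
    return (x.numel() / (2 * mu)) * (f(x + mu * u) - f(x - mu * u)) * u
```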
no code implementations • CVPR 2019 • Shangqian Gao, Cheng Deng, Heng Huang
Conventional model compression methods focus on RGB inputs.
no code implementations • 29 May 2019 • Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang
In particular, our methods not only achieve the best-known convergence rate of $O(1/T)$ for nonconvex optimization, but can also effectively solve many complex machine learning problems with multiple regularized penalties and constraints.