1 code implementation • 16 Feb 2023 • Shaowu Chen, Weize Sun, Lei Huang
Filter pruning has attracted increasing attention in recent years for its capacity in compressing and accelerating convolutional neural networks.
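A minimal sketch of one common pruning criterion, ranking filters by L1 norm (the criterion, shapes, and keep ratio here are illustrative assumptions, not necessarily this paper's method):

```python
import numpy as np

# Hypothetical conv weights: (out_channels, in_channels, kH, kW).
W = np.random.default_rng(0).normal(size=(64, 32, 3, 3))

# Score each output filter by its L1 norm, a common saliency proxy.
scores = np.abs(W).sum(axis=(1, 2, 3))

# Keep the top 75% of filters and drop the rest.
keep = np.sort(np.argsort(scores)[int(0.25 * len(scores)):])
W_pruned = W[keep]
print(W.shape, "->", W_pruned.shape)  # (64, 32, 3, 3) -> (48, 32, 3, 3)
```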
no code implementations • 12 Dec 2022 • Chengyu Zheng, Ning Song, Ruoyu Zhang, Lei Huang, Zhiqiang Wei, Jie Nie
To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval.
1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie
Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.
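For reference, the vanilla Mixup being critiqued interpolates inputs and one-hot labels with a Beta-distributed coefficient; a minimal sketch with assumed batch shapes:

```python
import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=np.random.default_rng(0)):
    """Vanilla mixup: convex combination of a batch with a shuffled copy."""
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    return (lam * x + (1 - lam) * x[idx],
            lam * y_onehot + (1 - lam) * y_onehot[idx])

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 3, 32, 32))          # images
y = np.eye(10)[rng.integers(0, 10, size=8)]  # one-hot labels
x_mix, y_mix = mixup(x, y)
```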
no code implementations • 22 Oct 2022 • Xuhua Li, Weize Sun, Lei Huang, Shaowu Chen
Filter pruning is a common method for compressing and accelerating deep neural networks (DNNs). Some studies have treated filter pruning as a combinatorial optimization problem and thus used evolutionary algorithms (EAs) to prune the filters of DNNs.
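A toy illustration of pruning as combinatorial search: a (1+1) evolutionary loop over binary keep/drop masks. The fitness function here is a made-up placeholder, not a trained network's accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, keep_ratio = 64, 0.5
saliency = rng.random(n_filters)  # placeholder for a real fitness signal

def fitness(mask):
    # Toy objective: retain salient filters while matching the keep budget.
    return saliency[mask.astype(bool)].sum() - 5.0 * abs(mask.mean() - keep_ratio)

mask = rng.integers(0, 2, n_filters)
for _ in range(500):
    child = mask.copy()
    child[rng.integers(0, n_filters)] ^= 1  # mutate one gene
    if fitness(child) >= fitness(mask):
        mask = child
print("kept", mask.sum(), "of", n_filters, "filters")
```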
1 code implementation • 11 Oct 2022 • Jiaxi Wang, Ji Wu, Lei Huang
Batch Normalization (BN) is a core and prevalent technique in accelerating the training of deep neural networks and improving the generalization on Computer Vision (CV) tasks.
1 code implementation • 7 Oct 2022 • Xi Weng, Lei Huang, Lei Zhao, Rao Muhammad Anwer, Salman Khan, Fahad Shahbaz Khan
A desirable objective in self-supervised learning (SSL) is to avoid feature collapse.
no code implementations • 13 Sep 2022 • Lei Huang, Hengtong Zhang, Tingyang Xu, Ka-Chun Wong
At the same time, the generated molecules lack sufficient diversity.
no code implementations • 16 Jun 2022 • Wei Shao, Lei Huang, Shuqi Liu, Shihua Ma, Linqi Song
In this paper, we propose an embedding regularized neural topic model, which applies specially designed training constraints to the word and topic embeddings to reduce the optimization space of the parameters.
no code implementations • CVPR 2022 • Ge Kan, Jinhu Lü, Tian Wang, Baochang Zhang, Aichun Zhu, Lei Huang, Guodong Guo, Hichem Snoussi
In this paper, we propose Bi-level doubly variational learning (BiDVL), which is based on a new bi-level optimization framework and two tractable variational distributions to facilitate learning EBLVMs.
1 code implementation • CVPR 2022 • Jiawei Zhang, Xiang Wang, Xiao Bai, Chen Wang, Lei Huang, Yimin Chen, Lin Gu, Jun Zhou, Tatsuya Harada, Edwin R. Hancock
The stereo contrastive feature loss function explicitly constrains the consistency between the learned features of matching pixel pairs, which are observations of the same 3D points.
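A toy sketch of such a constraint: pull features of matched pixels together and push random mismatches at least a margin apart (shapes and the margin form are assumptions, not the paper's exact loss):

```python
import numpy as np

def stereo_contrastive_loss(f_left, f_right, disparity, margin=1.0,
                            rng=np.random.default_rng(0)):
    """f_left, f_right: (H, W, C) feature maps; disparity: (H, W) in pixels."""
    H, W, _ = f_left.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xr = np.clip(xs - disparity.astype(int), 0, W - 1)  # matched right-view column
    pos = np.linalg.norm(f_left[ys, xs] - f_right[ys, xr], axis=-1)
    xr_neg = rng.integers(0, W, size=(H, W))            # random negatives
    neg = np.linalg.norm(f_left[ys, xs] - f_right[ys, xr_neg], axis=-1)
    return pos.mean() + np.maximum(0.0, margin - neg).mean()

rng = np.random.default_rng(1)
loss = stereo_contrastive_loss(rng.normal(size=(4, 8, 16)),
                               rng.normal(size=(4, 8, 16)),
                               rng.integers(0, 3, size=(4, 8)))
```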
1 code implementation • CVPR 2022 • Lei Huang, Yi Zhou, Tian Wang, Jie Luo, Xianglong Liu
We define the estimation shift magnitude of BN to quantitatively measure the difference between its estimated population statistics and expected ones.
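A toy illustration of that gap: BN's exponential moving averages tracked over mini-batches versus the true population statistics (the paper's exact magnitude definition may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
momentum, running_mean, running_var = 0.1, 0.0, 1.0

# Simulate BN's EMA updates on batches drawn from N(2, 3^2).
for _ in range(200):
    batch = rng.normal(loc=2.0, scale=3.0, size=32)
    running_mean = (1 - momentum) * running_mean + momentum * batch.mean()
    running_var = (1 - momentum) * running_var + momentum * batch.var()

# Estimation shift: distance between estimated and expected statistics.
shift = abs(running_mean - 2.0) + abs(running_var - 9.0)
print(f"mean/var estimates: {running_mean:.2f}/{running_var:.2f}, shift: {shift:.2f}")
```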
2 code implementations • NeurIPS 2021 • Tengwei Song, Jie Luo, Lei Huang
In this paper, we first theoretically show that transitive relations can be modeled with projections; the idempotency argument is sketched after the leaderboard line below.
Ranked #11 on Link Prediction on YAGO3-10
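The idempotency argument behind that claim, sketched in LaTeX (a standard derivation; the paper's actual scoring function adds more machinery):

```latex
% A projection P satisfies P^2 = P. If relation r holds between h and t
% exactly when P h = t, then transitivity is automatic:
%   r(a,b): Pa = b,   r(b,c): Pb = c
%   => c = Pb = P(Pa) = P^2 a = Pa, hence r(a,c).
\[
  P^2 = P \quad\Longrightarrow\quad
  \bigl(Pa = b \,\wedge\, Pb = c\bigr) \;\Rightarrow\; Pa = c .
\]
```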
no code implementations • 29 Sep 2021 • Aishan Liu, Shiyu Tang, Xianglong Liu, Xinyun Chen, Lei Huang, Haotong Qin, Dawn Song, DaCheng Tao
We observe that different $\ell_p$ bounded adversarial perturbations induce different statistical properties that can be separated and characterized by the statistics of Batch Normalization (BN).
1 code implementation • 9 Jul 2021 • Shaowu Chen, Jiahao Zhou, Weize Sun, Lei Huang
To overcome this problem, we propose to compress CNNs and alleviate performance degradation via joint matrix decomposition, which differs from existing works that compress layers separately (a joint-SVD sketch follows the task tags below).
Efficient Neural Network • Matrix Factorization / Decomposition
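A minimal joint-SVD sketch of the idea referenced above: two layers that share a dimension are stacked and factorized together so they reuse one common factor (shapes and rank are assumptions; the paper's decomposition scheme may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(256, 512)), rng.normal(size=(256, 512))

# One truncated SVD of the stacked matrix yields a shared left factor U_r.
rank = 64
U, _, _ = np.linalg.svd(np.hstack([W1, W2]), full_matrices=False)
U_r = U[:, :rank]                    # shared across both layers
V1, V2 = U_r.T @ W1, U_r.T @ W2      # small per-layer factors
W1_hat, W2_hat = U_r @ V1, U_r @ V2  # low-rank reconstructions
print(f"relative error W1: {np.linalg.norm(W1 - W1_hat) / np.linalg.norm(W1):.3f}")
```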
2 code implementations • 4 Jul 2021 • J. Gregory Pauloski, Qi Huang, Lei Huang, Shivaram Venkataraman, Kyle Chard, Ian Foster, Zhao Zhang
Kronecker-factored Approximate Curvature (K-FAC) has recently been shown to converge faster in deep neural network (DNN) training than stochastic gradient descent (SGD); however, K-FAC's larger memory footprint hinders its applicability to large models.
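The memory and compute trick at the heart of K-FAC, sketched with random stand-in tensors: the Fisher block factorizes as A ⊗ G, so preconditioning never materializes the full (d_in·d_out)² matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, batch = 128, 64, 256
a = rng.normal(size=(batch, d_in))    # layer inputs
g = rng.normal(size=(batch, d_out))   # backpropagated output gradients
grad_W = g.T @ a / batch              # (d_out, d_in) weight gradient

# Kronecker factors of the Fisher block, damped for invertibility.
damping = 1e-2
A = a.T @ a / batch + damping * np.eye(d_in)
G = g.T @ g / batch + damping * np.eye(d_out)

# (A ⊗ G)^{-1} vec(grad) == vec(G^{-1} grad A^{-1}).
precond_grad = np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
```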
no code implementations • 26 Feb 2021 • Yi Zhou, Lei Huang, Tianfei Zhou, Ling Shao
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
1 code implementation • CVPR 2021 • Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
In this paper, we focus on this more difficult scenario: learning networks where both weights and activations are binary, meanwhile, without any human annotated labels.
1 code implementation • 25 Jan 2021 • Lei Huang, Jiecong Lin, Xiangtao Li, Linqi Song, Ka-Chun Wong
To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data.
no code implementations • ICCV 2021 • Yi Zhou, Lei Huang, Tao Zhou, Huazhu Fu, Ling Shao
Second, the progressive report decoder consists of a sentence decoder and a word decoder, where we propose image-sentence matching and description accuracy losses to constrain the visual-textual semantic consistency.
no code implementations • ICCV 2021 • Yi Zhou, Lei Huang, Tao Zhou, Ling Shao
A category-invariant cross-domain transfer (CCT) method is proposed to address this single-to-multiple extension.
no code implementations • 19 Dec 2020 • Yu-Hang Xiao, David Ramírez, Peter J. Schreier, Cheng Qian, Lei Huang
Target detection is an important problem in multiple-input multiple-output (MIMO) radar.
1 code implementation • 10 Dec 2020 • Liang Hou, Zehuan Yuan, Lei Huang, HuaWei Shen, Xueqi Cheng, Changhu Wang
In particular, for real-time generation tasks, different devices require generators of different sizes due to varying computing power.
no code implementations • 3 Dec 2020 • Aishan Liu, Shiyu Tang, Xianglong Liu, Xinyun Chen, Lei Huang, Zhuozhuo Tu, Dawn Song, DaCheng Tao
To better understand this phenomenon, we propose the \emph{multi-domain} hypothesis, stating that different types of adversarial perturbations are drawn from different domains.
1 code implementation • 14 Oct 2020 • Nalinda Kulathunga, Nishath Rajiv Ranasinghe, Daniel Vrinceanu, Zackary Kinsman, Lei Huang, Yunjiao Wang
The nonlinearity of the activation functions used in deep learning models is crucial to the success of predictive models.
1 code implementation • CVPR 2021 • Lei Huang, Yi Zhou, Li Liu, Fan Zhu, Ling Shao
Results show that GW consistently improves the performance of different architectures, with absolute gains of $1.02\%$ $\sim$ $1.49\%$ in top-1 accuracy on ImageNet and $1.82\%$ $\sim$ $3.21\%$ in bounding box AP on COCO.
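A minimal sketch of group-wise ZCA whitening on (N, C) activations (the layout, group count, and eps here are assumptions; GW's full procedure differs in detail):

```python
import numpy as np

def group_whiten(X, groups=4, eps=1e-5):
    """ZCA-whiten channels within each group of an (N, C) activation matrix."""
    N, C = X.shape
    out = np.empty_like(X)
    for g in np.split(np.arange(C), groups):
        Xg = X[:, g] - X[:, g].mean(axis=0)
        cov = Xg.T @ Xg / N + eps * np.eye(len(g))
        vals, vecs = np.linalg.eigh(cov)
        out[:, g] = Xg @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)  # ZCA transform
    return out

Xw = group_whiten(np.random.default_rng(0).normal(size=(256, 64)))
# Within each group, the covariance of Xw is (close to) the identity.
```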
no code implementations • 27 Sep 2020 • Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs), and have successfully been used in various applications.
no code implementations • 22 Aug 2020 • Yi Zhou, Boyang Wang, Lei Huang, Shanshan Cui, Ling Shao
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency.
1 code implementation • ECCV 2020 • Yuming Shen, Jie Qin, Lei Huang
Deep generative models have been successfully applied to Zero-Shot Learning (ZSL) recently.
3 code implementations • 1 Jul 2020 • J. Gregory Pauloski, Zhao Zhang, Lei Huang, Weijia Xu, Ian T. Foster
Training neural networks with many processors can reduce time-to-solution; however, it is challenging to maintain convergence and efficiency at large scales.
no code implementations • 30 May 2020 • Pengyuan Li, Lei Huang, Guang-jie Ren
As the sentiments are typically short, we combine sentiments talking about the same aspect into a single document and apply a topic modeling method to identify hidden topics among customer reviews and summaries.
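A minimal sketch of that pipeline using scikit-learn's LDA; the sentiments, aspects, and topic count below are made-up placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Short sentiments grouped by the aspect they mention, then concatenated
# so that each aspect yields one longer pseudo-document.
sentiments_by_aspect = {
    "battery": ["battery dies fast", "great battery life", "battery drains overnight"],
    "screen": ["screen is sharp", "screen cracked easily", "love the bright screen"],
}
docs = [" ".join(texts) for texts in sentiments_by_aspect.values()]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))  # per-document topic mixtures
```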
1 code implementation • CVPR 2020 • Lei Huang, Li Liu, Fan Zhu, Diwen Wan, Zehuan Yuan, Bo Li, Ling Shao
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1 and reduce redundancy in representation.
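One common way to encourage the property described above is a soft orthogonality penalty; a quick check that it tracks the spread of singular values (this penalty is a generic stand-in, not this paper's exact scheme):

```python
import numpy as np

W = np.random.default_rng(0).normal(size=(128, 64)) / np.sqrt(128)

# ||W^T W - I||_F^2 is zero iff all singular values of W equal 1.
penalty = np.linalg.norm(W.T @ W - np.eye(64)) ** 2
svals = np.linalg.svd(W, compute_uv=False)
print(f"penalty: {penalty:.2f}, singular values in "
      f"[{svals.min():.2f}, {svals.max():.2f}]")
```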
1 code implementation • 1 Apr 2020 • Lei Zhao, Xiaohui Wang, Lei Huang
Capsule networks (CapsNets) are capable of modeling visual hierarchical relationships, which is achieved by the "routing-by-agreement" mechanism.
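For reference, the standard dynamic-routing loop (Sabour et al., 2017) that "routing-by-agreement" refers to; the capsule counts and dimensions below are illustrative:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    n2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1 + n2)) * s / np.sqrt(n2 + eps)

def routing_by_agreement(u_hat, iters=3):
    """u_hat: (n_in, n_out, d) prediction vectors from lower-level capsules."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        v = squash((c[..., None] * u_hat).sum(axis=0))        # (n_out, d) outputs
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement update
    return v

v = routing_by_agreement(np.random.default_rng(0).normal(size=(32, 10, 16)))
```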
1 code implementation • CVPR 2020 • Lei Huang, Lei Zhao, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and training Generative Adversarial Networks (GANs).
no code implementations • LREC 2020 • Izhak Shafran, Nan Du, Linh Tran, Amanda Perry, Lauren Keyes, Mark Knichel, Ashley Domin, Lei Huang, Yu-Hui Chen, Gang Li, Mingqiu Wang, Laurent El Shafey, Hagen Soltau, Justin S. Paul
We used this annotation scheme to label a corpus of about 6k clinical encounters.
no code implementations • ECCV 2020 • Lei Huang, Jie Qin, Li Liu, Fan Zhu, Ling Shao
To this end, we propose layer-wise conditioning analysis, which explores the optimization landscape with respect to each layer independently.
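One simple per-layer lens in this spirit: the condition number of a layer's (centered) input covariance, computed independently for each layer (a hedged sketch; the paper's analysis is broader):

```python
import numpy as np

def condition_number(acts, eps=1e-12):
    """Condition number of the covariance of (N, D) activations."""
    X = acts - acts.mean(axis=0)
    vals = np.linalg.eigvalsh(X.T @ X / len(X))
    return vals.max() / max(vals.min(), eps)

# Hypothetical activations captured at two layers of a network.
rng = np.random.default_rng(0)
acts = {"layer1": rng.normal(size=(512, 64)),
        "layer2": rng.normal(size=(512, 64)) * np.linspace(1, 10, 64)}
for name, a in acts.items():
    print(name, f"{condition_number(a):.1f}")
```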
no code implementations • 1 Feb 2020 • Bin Wen, Jie Luo, Xianglong Liu, Lei Huang
Extracting graph representations of visual scenes from images is a challenging task in computer vision.
no code implementations • 3 Jul 2019 • Wei Li, Zehuan Yuan, Dashan Guo, Lei Huang, Xiangzhong Fang, Changhu Wang
To perform action detection, we design a 3D convolution network with skip connections for tube classification and regression.
no code implementations • 18 Apr 2019 • Xianglong Liu, Lei Huang, Cheng Deng, Bo Lang, DaCheng Tao
For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search.
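The mechanical core of such a scheme, sketched with placeholder weights: replace the uniform Hamming distance with per-bit weights chosen for each query:

```python
import numpy as np

rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(1000, 32))  # database hash codes
query = rng.integers(0, 2, size=32)

plain = (db_codes != query).sum(axis=1)         # uniform Hamming distance

# Query-adaptive weighting scores each bit by its assumed reliability
# for this query (random placeholders here, not a learned quality measure).
w = rng.random(32)
weighted = ((db_codes != query) * w).sum(axis=1)
top10 = np.argsort(weighted)[:10]               # neighbors under the weighting
```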
5 code implementations • CVPR 2019 • Lei Huang, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
With the support of SND, we provide natural explanations of several phenomena from the perspective of optimization, e.g., why group-wise whitening of DBN generally outperforms full whitening and why the accuracy of BN degenerates with reduced batch sizes.
no code implementations • 11 Jan 2019 • Yizhi Liu, Xiaoyan Gu, Lei Huang, Junlin Ouyang, Miao Liao, Liangran Wu
Content-based adult video detection plays an important role in preventing pornography.
no code implementations • 29 Nov 2018 • Siwen Jiang, Wenxuan Wei, Shihao Guo, Hongguang Fu, Lei Huang
The great achievements of convolutional neural networks (CNNs) in feature and metric learning have attracted many researchers.
1 code implementation • 27 Sep 2018 • Zhao Zhang, Lei Huang, Uri Manor, Linjing Fang, Gabriele Merlo, Craig Michoski, John Cazes, Niall Gaffney
Our experiments with benchmarks and real applications show that FanStore can scale DL training to 512 compute nodes with over 90% scaling efficiency.
Distributed, Parallel, and Cluster Computing
no code implementations • 5 Jun 2018 • Lei Huang, Guang-jie Ren, Shun Jiang, Raphael Arar, Eric Young Liu
Business Architecture (BA) plays a significant role in helping organizations understand enterprise structures and processes, and align them with strategic objectives.
6 code implementations • CVPR 2018 • Lei Huang, Dawei Yang, Bo Lang, Jia Deng
Batch Normalization (BN) is capable of accelerating the training of deep models by centering and scaling activations within mini-batches.
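The centering-and-scaling step itself, as a minimal training-mode sketch over an (N, C) mini-batch:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode BN: normalize per channel, then apply the learned affine."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(128, 16))
y = batch_norm(x, gamma=np.ones(16), beta=np.zeros(16))
# y now has (approximately) zero mean and unit variance per channel.
```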
1 code implementation • The Thirty-Second AAAI Conference on Artificial Intelligence 2018 • Lei Huang, Xianglong Liu, Bo Lang, Adams Wei Yu, Yongliang Wang, Bo Li
In this paper, we generalize the square orthogonal matrix to an orthogonal rectangular matrix and formulate this problem in feed-forward Neural Networks (FNNs) as Optimization over Multiple Dependent Stiefel Manifolds (OMDSM).
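For intuition, the closest matrix with orthonormal columns to a given rectangular matrix comes from its SVD (the polar decomposition); OMDSM's actual optimization on the Stiefel manifold is more involved:

```python
import numpy as np

def nearest_orthonormal(W):
    """Frobenius-nearest point on the Stiefel manifold (orthonormal columns)."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

W = np.random.default_rng(0).normal(size=(128, 64))
Q = nearest_orthonormal(W)
assert np.allclose(Q.T @ Q, np.eye(64), atol=1e-8)
```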
no code implementations • 10 Dec 2017 • Bo Wu, Yang Liu, Bo Lang, Lei Huang
Convolutional neural networks (CNNs) can be applied to graph similarity matching, in which case they are called graph CNNs.
Ranked #1 on Graph Classification on AIDS
1 code implementation • 6 Oct 2017 • Lei Huang, Xianglong Liu, Bo Lang, Bo Li
We conduct comprehensive experiments on several widely-used image datasets including CIFAR-10, CIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art convolutional neural networks, such as Inception, VGG and residual networks.
1 code implementation • ICCV 2017 • Lei Huang, Xianglong Liu, Yang Liu, Bo Lang, DaCheng Tao
Training deep neural networks is difficult due to the pathological curvature problem.
1 code implementation • 16 Sep 2017 • Lei Huang, Xianglong Liu, Bo Lang, Adams Wei Yu, Yongliang Wang, Bo Li
In this paper, we generalize the square orthogonal matrix to an orthogonal rectangular matrix and formulate this problem in feed-forward Neural Networks (FNNs) as Optimization over Multiple Dependent Stiefel Manifolds (OMDSM).
2 code implementations • ICLR 2018 • Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime Carbonell
In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization.
no code implementations • 31 May 2017 • Chao Zuo, Tianyang Tao, Shijie Feng, Lei Huang, Anand Asundi, Qian Chen
Recent advances in imaging sensors and digital light projection technology have facilitated a rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy.
no code implementations • ICCV 2015 • Xianglong Liu, Lei Huang, Cheng Deng, Jiwen Lu, Bo Lang
Many applications have enjoyed the benefits of complementary hash tables and information fusion over multiple views.
no code implementations • 2 Dec 2014 • Philip T. Reiss, Lei Huang, Huaihou Chen, Stan Colcombe
We discuss three approaches to estimating varying-smoother models: (a) methods that employ a tensor product penalty; (b) an approach based on smoothed functional principal component scores; and (c) two-step methods consisting of an initial smooth with respect to $t$ at each $s$, followed by a postprocessing step.
Methodology