no code implementations • ECCV 2020 • Jingwei Xin, Nannan Wang, Xinrui Jiang, Jie Li, Heng Huang, Xinbo Gao
Lighter model and faster inference are the focus of current single image super-resolution (SISR) research.
no code implementations • ICML 2020 • Lei Luo, Yanfu Zhang, Heng Huang
Nonnegative Matrix Factorization (NMF) has become an increasingly important research topic in machine learning.
no code implementations • ICML 2020 • Hongchang Gao, Heng Huang
To address the problem of lacking gradients in many applications, we propose two new stochastic zeroth-order Frank-Wolfe algorithms and theoretically prove that they have a faster convergence rate than existing methods for non-convex problems.
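As a rough illustration of the idea behind zeroth-order Frank-Wolfe methods (not the paper's exact algorithms), the sketch below combines a two-point gradient estimator with a Frank-Wolfe step over an ℓ1-norm ball; the helper names, the constraint set, and the step schedule are illustrative assumptions.

```python
# Minimal sketch: zeroth-order gradient estimation + Frank-Wolfe over an l1 ball.
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=10, rng=np.random.default_rng(0)):
    """Estimate grad f(x) from function values only (two-point estimator)."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

def frank_wolfe_step(x, grad, radius=1.0, step=0.1):
    """Linear minimization oracle for the l1 ball, then convex combination."""
    s = np.zeros_like(x)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])        # vertex of the l1 ball
    return (1 - step) * x + step * s

# toy usage: minimize a quadratic without ever calling its gradient
f = lambda x: 0.5 * np.sum((x - 0.3) ** 2)
x = np.zeros(5)
for t in range(200):
    x = frank_wolfe_step(x, zo_gradient(f, x), radius=1.0, step=2 / (t + 2))
```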
no code implementations • ICML 2020 • Hong Chen, Guodong Liu, Heng Huang
Meanwhile, in these feature selection models, the interactions between features are often ignored or just discussed under prior structure information.
no code implementations • ICML 2020 • Runxue Bao, Bin Gu, Heng Huang
Ordered Weighted $L_{1}$-norms (OWL) are a new family of regularizers for high-dimensional sparse regression.
1 code implementation • 22 Sep 2023 • Kai Huang, Hanyun Yin, Heng Huang, Wei Gao
With the fast growth of LLM-enabled AI applications and the democratization of open-sourced LLMs, fine-tuning has become possible for non-expert individuals, but intensive LLM fine-tuning performed worldwide could result in significant energy consumption and a large carbon footprint, with a correspondingly large environmental impact.
no code implementations • 18 Sep 2023 • Reza Shirkavand, Heng Huang
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning for leveraging large graph transformer models in downstream graph-based prediction tasks.
no code implementations • 15 Sep 2023 • Marinka Zitnik, Michelle M. Li, Aydin Wells, Kimberly Glass, Deisy Morselli Gysi, Arjun Krishnan, T. M. Murali, Predrag Radivojac, Sushmita Roy, Anaïs Baudot, Serdar Bozdag, Danny Z. Chen, Lenore Cowen, Kapil Devkota, Anthony Gitter, Sara Gosline, Pengfei Gu, Pietro H. Guzzi, Heng Huang, Meng Jiang, Ziynet Nesibe Kesimoglu, Mehmet Koyuturk, Jian Ma, Alexander R. Pico, Nataša Pržulj, Teresa M. Przytycka, Benjamin J. Raphael, Anna Ritz, Roded Sharan, Yang shen, Mona Singh, Donna K. Slonim, Hanghang Tong, Xinan Holly Yang, Byung-Jun Yoon, Haiyuan Yu, Tijana Milenković
As such, it is expected to help shape short- and long-term vision for future computational and algorithmic research in network biology.
1 code implementation • 6 Aug 2023 • Xidong Wu, Zhengmian Hu, Jian Pei, Heng Huang
To address the above challenge, we study the serverless multi-party collaborative AUPRC maximization problem, since serverless multi-party collaborative training can cut down communication costs by avoiding the server-node bottleneck. We reformulate it as a conditional stochastic optimization problem in a serverless multi-party collaborative learning setting and propose a new ServerLess biAsed sTochastic gradiEnt (SLATE) algorithm to directly optimize the AUPRC.
1 code implementation • 27 Jul 2023 • Jianan Fan, Dongnan Liu, Hang Chang, Heng Huang, Mei Chen, Weidong Cai
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
1 code implementation • 17 Jul 2023 • Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin
It also provides 5.7x faster training, reducing the training time for a 7B variant from 80 minutes (for Alpaca) to 14 minutes (we apply IFT for the same number of epochs as Alpaca (7B) but on less data, using 4×NVIDIA A100 (80GB) GPUs and following the original Alpaca setting and hyperparameters).
1 code implementation • 16 Jul 2023 • Zhenyi Wang, Enneng Yang, Li Shen, Heng Huang
Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting.
no code implementations • CVPR 2023 • Yimu Wang, Dinghuai Zhang, Yihan Wu, Heng Huang, Hongyang Zhang
We identify a phenomenon named player domination in the bargaining game, namely that the existing max-based approaches, such as MAX and MSD, do not converge.
1 code implementation • 5 Jun 2023 • Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou
Large language models~(LLMs) are instruction followers, but it can be challenging to find the best instruction for different situations, especially for black-box LLMs on which backpropagation is forbidden.
no code implementations • 1 Jun 2023 • Reza Shirkavand, Fei Zhang, Heng Huang
This work highlights the potential of deep learning techniques, specifically transformer-based models, in revolutionizing the healthcare industry's approach to postoperative care.
no code implementations • 25 May 2023 • Reza Shirkavand, Liang Zhan, Heng Huang, Li Shen, Paul M. Thompson
Especially in studies of brain diseases, research cohorts may include both neuroimaging data and genetic data, but for practical clinical diagnosis, we often need to make disease predictions only based on neuroimages.
no code implementations • 23 May 2023 • Wentao Bao, Lichang Chen, Heng Huang, Yu Kong
However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives (i.e., states and objects), are not properly addressed in existing CLIP-based CZSL literature.
no code implementations • 3 May 2023 • Lichang Chen, Heng Huang, Minhao Cheng
To address this critical problem, we first investigate the loss landscape of vanilla prompt tuning and find that, when visualized, it is precipitous: a slight change in the input data can cause a large fluctuation in the loss.
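A common way to probe such a landscape is to evaluate the loss along a one-dimensional slice around the current prompt embedding; the minimal sketch below uses a dummy loss function and prompt tensor, so it only illustrates the probing procedure, not the paper's setup.

```python
# Hedged illustration: evaluate the loss along a 1-D slice around a prompt embedding.
import numpy as np

def loss_fn(prompt_emb):                    # stand-in for the task loss
    return float(np.sum(np.sin(5 * prompt_emb) ** 2))

def loss_slice(prompt_emb, direction, radii):
    direction = direction / np.linalg.norm(direction)
    return [loss_fn(prompt_emb + r * direction) for r in radii]

rng = np.random.default_rng(0)
prompt = rng.standard_normal(64)
curve = loss_slice(prompt, rng.standard_normal(64), np.linspace(-1, 1, 21))
# a sharp, jagged curve suggests the loss changes abruptly under small perturbations
```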
no code implementations • 3 May 2023 • Lichang Chen, Minhao Cheng, Heng Huang
Backdoor learning has become an emerging research area towards building a trustworthy machine learning system.
no code implementations • 12 Apr 2023 • Xiangyu Xu, Lichang Chen, Changjiang Cai, Huangying Zhan, Qingan Yan, Pan Ji, Junsong Yuan, Heng Huang, Yi Xu
Direct optimization of interpolated features on multi-resolution voxel grids has emerged as a more efficient alternative to MLP-like modules.
no code implementations • 6 Apr 2023 • Jiuhai Chen, Lichang Chen, Heng Huang, Tianyi Zhou
However, it is not clear whether CoT is still effective on more recent instruction finetuned (IFT) LLMs such as ChatGPT.
no code implementations • 13 Feb 2023 • Junyi Li, Feihu Huang, Heng Huang
Bilevel Optimization has witnessed notable progress recently with new emerging efficient algorithms, yet it is underexplored in the Federated Learning setting.
no code implementations • 13 Feb 2023 • Junyi Li, Feihu Huang, Heng Huang
This matches the best known rate for first-order FL algorithms and \textbf{FedDA-MVR} is the first adaptive FL algorithm that achieves this rate.
no code implementations • 8 Feb 2023 • Xidong Wu, Zhengmian Hu, Heng Huang
Minimax optimization over Riemannian manifolds (possibly with nonconvex constraints) has been actively applied to solve many problems, such as robust dimensionality reduction and deep neural networks with orthogonal weights (Stiefel manifold).
no code implementations • 10 Jan 2023 • Chuan He, Heng Huang, Zhaosong Lu
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization that minimizes a twice differentiable function subject to nonlinear equality constraints and also a convex conic constraint.
1 code implementation • 9 Dec 2022 • Yihan Wu, Aleksandar Bojchevski, Heng Huang
In this paper, we extensively study this phenomenon for graph data.
no code implementations • 2 Dec 2022 • Xidong Wu, Feihu Huang, Zhengmian Hu, Heng Huang
Federated learning has attracted increasing attention with the emergence of distributed data.
1 code implementation • 19 Nov 2022 • Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang Zhang
In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust.
no code implementations • 27 Oct 2022 • Heng Huang, Lin Zhao, Xintao Hu, Haixing Dai, Lu Zhang, Dajiang Zhu, Tianming Liu
Visual attention is a fundamental mechanism in the human brain, and it inspires the design of attention mechanisms in deep neural networks.
no code implementations • 25 Oct 2022 • Junyi Li, Heng Huang
Therefore, Federated Recommender (FedRec) systems have been proposed to mitigate the privacy concerns of non-distributed recommender systems.
no code implementations • 14 Oct 2022 • Wenhan Xian, Feihu Huang, Heng Huang
In our theoretical analysis, we prove that our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with the communication cost of $O(k \log(d))$ at each iteration.
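A per-iteration communication cost of $O(k \log(d))$ is consistent with transmitting only $k$ coordinates plus their indices; as a hedged illustration (the paper's actual compression scheme may differ), a generic top-k gradient compressor looks like this:

```python
# Assumed top-k compression sketch, not necessarily the paper's scheme.
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries; return (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, d):
    out = np.zeros(d)
    out[idx] = vals
    return out

g = np.random.default_rng(0).standard_normal(1000)
idx, vals = topk_compress(g, k=50)          # what each worker would transmit
g_hat = topk_decompress(idx, vals, g.size)  # what the receiver reconstructs
```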
1 code implementation • 7 Sep 2022 • Alireza Ganjdanesh, Shangqian Gao, Heng Huang
To fill in this gap, we propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process, thereby utilizing information from both inputs and outputs of the model.
no code implementations • 11 Aug 2022 • Runxue Bao, Bin Gu, Heng Huang
To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems, which reduces the number of block iterations by eliminating inactive coefficients during optimization, thereby achieving faster explicit model identification and improving algorithm efficiency.
no code implementations • 9 Aug 2022 • Xin Jin, Qiang Deng, Jianwen Lv, Heng Huang, Hao Lou, Chaoen Xiao
The differences in the three attributes between the input images and the photography templates or guidance images are described in natural language, which we call aesthetic natural language guidance (ALG).
no code implementations • 9 Aug 2022 • Xinghui Zhou, Xin Jin, Jianwen Lv, Heng Huang, Ming Mao, Shuai Cui
In this paper, we propose aesthetic attribute assessment, i.e., aesthetic attribute captioning: assessing aesthetic attributes such as composition, lighting usage, and color arrangement.
no code implementations • 14 Jul 2022 • Haoteng Tang, Guixiang Ma, Lei Guo, Xiyao Fu, Heng Huang, Liang Zhang
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks, which can be used for different prediction tasks.
no code implementations • 8 Jul 2022 • Bin Gu, Chenkang Zhang, Huan Xiong, Heng Huang
Self-paced learning is an effective method for handling noisy data.
no code implementations • 17 Jun 2022 • Yihan Wu, Hongyang Zhang, Heng Huang
The challenge is to design a provably robust algorithm that takes into consideration the 1-NN search and the high-dimensional nature of the embedding space.
no code implementations • 11 Jun 2022 • Junyi Li, Jian Pei, Heng Huang
A bilevel optimization problem is an optimization problem with two levels of entangled sub-problems.
no code implementations • 6 May 2022 • Haoteng Tang, Xiyao Fu, Lei Guo, Yalin Wang, Scott Mackin, Olusola Ajilore, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan
Since brain networks derived from functional and structural MRI describe the brain topology from different perspectives, exploring a representation that combines these cross-modality brain networks is non-trivial.
no code implementations • 3 May 2022 • Junyi Li, Feihu Huang, Heng Huang
Specifically, we first propose FedBiO, a deterministic gradient-based algorithm, and show that it requires $O(\epsilon^{-2})$ iterations to reach an $\epsilon$-stationary point.
no code implementations • 23 Apr 2022 • Runxue Bao, Xidong Wu, Wenhan Xian, Heng Huang
To the best of our knowledge, this is the first work of distributed safe dynamic screening method.
no code implementations • 19 Mar 2022 • Qingsong Zhang, Bin Gu, Zhiyuan Dang, Cheng Deng, Heng Huang
Based on that, we propose a novel and practical VFL framework with black-box models, which is inseparably interconnected to the promising properties of ZOO.
no code implementations • CVPR 2022 • An Xu, Wenqi Li, Pengfei Guo, Dong Yang, Holger Roth, Ali Hatamizadeh, Can Zhao, Daguang Xu, Heng Huang, Ziyue Xu
In this work, we propose a novel training framework FedSM to avoid the client drift issue and successfully close the generalization gap compared with the centralized training for medical image segmentation tasks for the first time.
no code implementations • 11 Mar 2022 • Tiange Xiang, Chaoyi Zhang, Xinyi Wang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai
With the backward skip connections, we propose a U-Net-based network family, namely Bi-directional O-shape networks, which sets new benchmarks on multiple public medical imaging segmentation datasets.
no code implementations • 23 Feb 2022 • Yihan Wu, Heng Huang, Hongyang Zhang
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ of the interpolating neural network with $p$ parameters on arbitrary data distributions.
no code implementations • 10 Feb 2022 • Haozhe Jia, Chao Bai, Weidong Cai, Heng Huang, Yong Xia
In our previous work, i.e., HNF-Net, high-resolution feature representations and a lightweight non-local self-attention mechanism are exploited for brain tumor segmentation using multi-modal MR imaging.
1 code implementation • 6 Jan 2022 • Dongnan Liu, Chaoyi Zhang, Yang Song, Heng Huang, Chenyu Wang, Michael Barnett, Weidong Cai
Recent advances in unsupervised domain adaptation (UDA) techniques have witnessed great success in cross-domain computer vision tasks, enhancing the generalization ability of data-driven deep learning architectures by bridging the domain distribution gaps.
no code implementations • CVPR 2022 • Jiexi Yan, Lei Luo, Chenghao Xu, Cheng Deng, Heng Huang
While in metric space, we utilize weakly-supervised contrastive learning to excavate these negative correlations hidden in noisy data.
no code implementations • 9 Dec 2021 • Junyi Li, Bin Gu, Heng Huang
Combining our new formulation with the alternative update of the inner and outer variables, we propose an efficient fully single loop algorithm.
no code implementations • NeurIPS 2021 • Zhengmian Hu, Feihu Huang, Heng Huang
In the paper, we study the underdamped Langevin diffusion (ULD) with strongly-convex potential consisting of finite summation of $N$ smooth components, and propose an efficient discretization method, which requires $O(N+d^\frac{1}{3}N^\frac{2}{3}/\varepsilon^\frac{2}{3})$ gradient evaluations to achieve $\varepsilon$-error (in $\sqrt{\mathbb{E}{\lVert{\cdot}\rVert_2^2}}$ distance) for approximating $d$-dimensional ULD.
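For orientation, the sketch below implements one plain Euler-Maruyama step of underdamped Langevin dynamics on a toy quadratic potential; the paper's discretization method and gradient estimator are more refined than this, so treat the parameter values and helper names as illustrative assumptions.

```python
# Plain Euler-Maruyama step for underdamped Langevin dynamics (illustrative only).
import numpy as np

def uld_step(x, v, grad_f, gamma=2.0, u=1.0, h=0.01, rng=np.random.default_rng(0)):
    """One step of dx = v dt, dv = -(gamma v + u grad f(x)) dt + sqrt(2 gamma u) dB."""
    noise = rng.standard_normal(x.size)
    v_new = v - h * (gamma * v + u * grad_f(x)) + np.sqrt(2 * gamma * u * h) * noise
    x_new = x + h * v
    return x_new, v_new

grad_f = lambda x: x                        # potential f(x) = ||x||^2 / 2
x, v = np.ones(3), np.zeros(3)
for _ in range(1000):
    x, v = uld_step(x, v, grad_f)
```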
no code implementations • NeurIPS 2021 • Feihu Huang, Xidong Wu, Heng Huang
For our stochastic algorithms, we first prove that the mini-batch stochastic mirror descent ascent (SMDA) method obtains a sample complexity of $O(\kappa^3\epsilon^{-4})$ for finding an $\epsilon$-stationary point, where $\kappa$ denotes the condition number.
no code implementations • NeurIPS 2021 • Wenhan Xian, Feihu Huang, Yanfu Zhang, Heng Huang
We prove that our DM-HSGD algorithm achieves a stochastic first-order oracle (SFO) complexity of $O(\kappa^3 \epsilon^{-3})$ for the decentralized stochastic nonconvex-strongly-concave problem to search an $\epsilon$-stationary point, which improves the existing best theoretical results.
no code implementations • NeurIPS 2021 • Hongchang Gao, Heng Huang
The stochastic compositional optimization problem covers a wide range of machine learning models, such as sparse additive models and model-agnostic meta-learning.
no code implementations • 29 Oct 2021 • Jiexi Yan, Lei Luo, Cheng Deng, Heng Huang
Since these noisy labels often cause severe performance degradation, it is crucial to enhance the robustness and generalization ability of DML.
no code implementations • 29 Sep 2021 • Yihan Wu, Heng Huang
In this paper, we boost the performance of deep metric learning (DML) models with adversarial examples generated by attacking two new objective functions: \textit{intra-class alignment} and \textit{hyperspherical uniformity}.
no code implementations • 29 Sep 2021 • Wanli Shi, Heng Huang, Bin Gu
Then, we transform the smoothed bi-level optimization to an unconstrained penalty problem by replacing the smoothed sub-problem with its first-order necessary conditions.
no code implementations • 29 Sep 2021 • Huimin Wu, Heng Huang, Bin Gu
To adapt to semi-supervised learning problems, they need to estimate labels for unlabeled data in advance, which inevitably degrades the performance of the learned model due to the bias in the estimated labels for unlabeled data.
no code implementations • 29 Sep 2021 • Taeuk Jang, Xiaoqian Wang, Heng Huang
To achieve this goal, we reformulate the data input by eliminating the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
no code implementations • 26 Sep 2021 • Qingsong Zhang, Bin Gu, Cheng Deng, Songxiang Gu, Liefeng Bo, Jian Pei, Heng Huang
To address the challenges of communication and computation resource utilization, we propose an asynchronous stochastic quasi-Newton (AsySQN) framework for VFL, under which three algorithms, i.e., AsySQN-SGD, -SVRG, and -SAGA, are proposed.
no code implementations • 18 Sep 2021 • Xiyuan Wei, Bin Gu, Heng Huang
The conditional gradient algorithm (also known as the Frank-Wolfe algorithm) has recently regained popularity in the machine learning community due to its projection-free property to solve constrained problems.
no code implementations • 13 Sep 2021 • Tiange Xiang, Yang Song, Chaoyi Zhang, Dongnan Liu, Mei Chen, Fan Zhang, Heng Huang, Lauren O'Donnell, Weidong Cai
With image-level labels only, patch-wise classification would be sub-optimal due to inconsistency between the patch appearance and image-level label.
no code implementations • 9 Aug 2021 • Haoteng Tang, Haozhe Jia, Weidong Cai, Heng Huang, Yong Xia, Liang Zhan
In this paper, we propose a Boundary-aware Graph Reasoning (BGR) module to learn long-range contextual features for semantic segmentation.
no code implementations • 9 Aug 2021 • Haozhe Jia, Haoteng Tang, Guixiang Ma, Weidong Cai, Heng Huang, Liang Zhan, Yong Xia
In the PSGR module, a graph is first constructed by projecting each pixel onto a node based on the features produced by the segmentation backbone, and then converted into a sparsely connected graph by keeping only the K strongest connections to each uncertain pixel.
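A minimal sketch of the top-K graph construction described above (cosine similarity and the helper name are assumptions, and the uncertainty-based node selection is omitted):

```python
# Build a sparse adjacency matrix by keeping only the K strongest connections per node.
import numpy as np

def build_sparse_graph(features, k=8):
    """features: (N, C) per-pixel features -> (N, N) sparse adjacency."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T                            # cosine similarity between nodes
    np.fill_diagonal(sim, -np.inf)           # no self-loops
    adj = np.zeros_like(sim)
    topk = np.argpartition(sim, -k, axis=1)[:, -k:]
    rows = np.repeat(np.arange(sim.shape[0]), k)
    adj[rows, topk.ravel()] = sim[rows, topk.ravel()]
    return adj

adj = build_sparse_graph(np.random.default_rng(0).standard_normal((100, 32)), k=8)
```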
no code implementations • 26 Jul 2021 • Feihu Huang, Junyi Li, Shangqian Gao, Heng Huang
Specifically, we propose a bilevel optimization method based on Bregman distance (BiO-BreD) to solve deterministic bilevel problems, which achieves a lower computational complexity than the best known results.
3 code implementations • 8 Jul 2021 • Bo Liu, Chaowei Tan, Jiazhou Wang, Tao Zeng, Huasong Shan, Houpu Yao, Heng Huang, Peng Dai, Liefeng Bo, Yanqing Chen
We use this platform to demonstrate our research and development results on privacy preserving machine learning algorithms.
1 code implementation • 26 Jun 2021 • Xinyi Wang, Tiange Xiang, Chaoyi Zhang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai
We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets, and the experimental results show that our BiX-NAS searched architecture achieves state-of-the-art performance with significantly lower computational cost.
1 code implementation • ICLR 2022 • Feihu Huang, Shangqian Gao, Heng Huang
In the paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques.
1 code implementation • CVPR 2021 • Shangqian Gao, Feihu Huang, Weidong Cai, Heng Huang
Specifically, we train a stand-alone neural network to predict sub-networks' performance and then maximize the output of the network as a proxy of accuracy to guide pruning.
1 code implementation • CVPR 2021 • Zhiyuan Dang, Cheng Deng, Xu Yang, Kun Wei, Heng Huang
Specifically, for the local level, we match the nearest neighbors based on batch embedded features, while for the global level, we match neighbors from the overall embedded features.
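As a hedged sketch of the two matching levels, the snippet below finds nearest neighbors within the current batch (local) and within a memory bank of all embedded features (global); cosine similarity, the array shapes, and the function name are illustrative assumptions.

```python
# Local (in-batch) vs. global (memory-bank) nearest-neighbor matching.
import numpy as np

def nearest_neighbors(queries, keys, exclude_self=False):
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sim = q @ k.T
    if exclude_self:
        np.fill_diagonal(sim, -np.inf)       # valid when queries and keys coincide
    return np.argmax(sim, axis=1)

rng = np.random.default_rng(0)
batch_feats = rng.standard_normal((32, 128))     # embeddings of the current batch
memory_bank = rng.standard_normal((10_000, 128)) # embeddings of the whole dataset
local_nn = nearest_neighbors(batch_feats, batch_feats, exclude_self=True)
global_nn = nearest_neighbors(batch_feats, memory_bank)
```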
no code implementations • CVPR 2021 • Jiexi Yan, Lei Luo, Cheng Deng, Heng Huang
Learning feature embedding directly from images without any human supervision is a very challenging and essential task in the field of computer vision and machine learning.
1 code implementation • NeurIPS 2021 • Feihu Huang, Junyi Li, Heng Huang
To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms.
no code implementations • NeurIPS 2021 • Hongchang Gao, Heng Huang
The stochastic compositional optimization problem covers a wide range of machine learning models, such as sparse additive models and model-agnostic meta-learning.
no code implementations • 9 Apr 2021 • Zhou Zhai, Bin Gu, Heng Huang
To explore this problem, in this paper, we propose a new reinforcement learning-based ZO algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
1 code implementation • 9 Mar 2021 • Zhiyuan Dang, Cheng Deng, Xu Yang, Heng Huang
In this paper, we present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views to obtain more discriminative features and competitive results.
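A rough sketch of how a contrastive loss can be applied over both views of a soft-assignment matrix, rows as the sample view and columns as the class view; the InfoNCE form, temperature, and shapes are common-practice assumptions rather than the paper's exact formulation.

```python
# Contrastive loss over the sample view (rows) and the class view (columns).
import numpy as np

def info_nce(a, b, temp=0.5):
    """a, b: (M, D) paired views; positives are matching rows."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temp
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(10), size=256)        # soft assignments, view 1: (N, K)
p2 = rng.dirichlet(np.ones(10), size=256)        # soft assignments, view 2
loss = info_nce(p1, p2) + info_nce(p1.T, p2.T)   # sample view + class view
```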
1 code implementation • 4 Mar 2021 • Guanghan Ning, Guang Chen, Chaowei Tan, Si Luo, Liefeng Bo, Heng Huang
We propose a new offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
no code implementations • 1 Mar 2021 • Qingsong Zhang, Bin Gu, Cheng Deng, Heng Huang
Vertical federated learning (VFL) attracts increasing attention due to the emerging demands of multi-party collaborative modeling and concerns of privacy leakage.
1 code implementation • 17 Feb 2021 • Bin Gu, Guodong Liu, Yanfu Zhang, Xiang Geng, Heng Huang
Modern machine learning algorithms usually involve tuning multiple (from one to thousands) hyperparameters which play a pivotal role in terms of model generalizability.
no code implementations • 9 Feb 2021 • Zhengmian Hu, Feihu Huang, Heng Huang
Moreover, our HMC methods with biased gradient estimators, such as SARAH and SARGE, require $\tilde{O}(N+\sqrt{N} \kappa^2 d^{\frac{1}{2}} \varepsilon^{-1})$ gradient complexity, which has the same dependency on condition number $\kappa$ and dimension $d$ as the full gradient method, but improves the dependency on sample size $N$ by a factor of $N^{\frac{1}{2}}$.
no code implementations • 8 Feb 2021 • An Xu, Heng Huang
In this work, we propose a new method to improve the training performance in cross-silo FL via maintaining double momentum buffers.
no code implementations • 3 Feb 2021 • Liangxi Liu, Feng Zheng, Hong Chen, Guo-Jun Qi, Heng Huang, Ling Shao
On the client side, a prior loss that uses the global posterior probabilistic parameters delivered from the server is designed to guide the local training.
no code implementations • ICCV 2021 • Chao Li, Shangqian Gao, Cheng Deng, Wei Liu, Heng Huang
Specifically, given a target model, we first construct its substitute model to exploit cross-modal correlations within Hamming space, with which we create adversarial examples by issuing a limited number of queries to the target model.
no code implementations • ICCV 2021 • Yanfu Zhang, Shangqian Gao, Heng Huang
In this paper, we focus on the discrimination-aware compression of Convolutional Neural Networks (CNNs).
no code implementations • 1 Jan 2021 • An Xu, Xiao Yan, Hongchang Gao, Heng Huang
The heavy communication for model synchronization is a major bottleneck for scaling up the distributed deep neural network training to many workers.
no code implementations • ICCV 2021 • Yanfu Zhang, Lei Luo, Wenhan Xian, Heng Huang
However, pair-wise methods involve expensive training costs, while proxy-based methods are less accurate in characterizing the relationships between data points.
no code implementations • 1 Jan 2021 • Shangqian Gao, Feihu Huang, Heng Huang
In this paper, we propose a novel channel pruning method to solve the problem of compression and acceleration of Convolutional Neural Networks (CNNs).
no code implementations • 30 Dec 2020 • Haozhe Jia, Weidong Cai, Heng Huang, Yong Xia
In this paper, we propose a Hybrid High-resolution and Non-local Feature Network (H2NF-Net) to segment brain tumor in multimodal MR images.
no code implementations • 10 Dec 2020 • Haoteng Tang, Guixiang Ma, Lifang He, Heng Huang, Liang Zhan
In this paper, we propose a new interpretable graph pooling framework, CommPOOL, which can capture and preserve the hierarchical community structure of graphs in the graph representation learning process.
no code implementations • 15 Oct 2020 • Xin Jin, Xiqiao Li, Heng Huang, XiaoDong Li, Xinghui Zhou
In this paper, we propose a Deep Drift-Diffusion (DDD) model inspired by psychologists to predict aesthetic score distribution from images.
1 code implementation • 11 Sep 2020 • Dongnan Liu, Donghao Zhang, Yang Song, Fan Zhang, Lauren O'Donnell, Heng Huang, Mei Chen, Weidong Cai
In this work, we present an unsupervised domain adaptation (UDA) method, named Panoptic Domain Adaptive Mask R-CNN (PDAM), for unsupervised instance segmentation in microscopy images.
no code implementations • 1 Sep 2020 • Junyi Li, Bin Gu, Heng Huang
In this paper, we propose an improved bilevel model which converges faster and better compared to the current formulation.
no code implementations • 24 Aug 2020 • Hongchang Gao, Heng Huang
To the best of our knowledge, this is the first adaptive decentralized training approach.
no code implementations • 24 Aug 2020 • Hongchang Gao, Heng Huang
The condition for achieving the linear speedup is also provided for this variant.
no code implementations • 18 Aug 2020 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
Our Acc-MDA achieves a low gradient complexity of $\tilde{O}(\kappa_y^{4.5}\epsilon^{-3})$ without requiring large batches for finding an $\epsilon$-stationary point.
no code implementations • 14 Aug 2020 • Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng Huang
To the best of our knowledge, AFSGD-VP and its SVRG and SAGA variants are the first asynchronous federated learning algorithms for vertically partitioned data.
no code implementations • 14 Aug 2020 • Bin Gu, Zhiyuan Dang, Xiang Li, Heng Huang
In this paper, we focus on nonlinear learning with kernels, and propose a federated doubly stochastic kernel learning (FDSKL) algorithm for vertically partitioned data.
no code implementations • 13 Aug 2020 • An Xu, Zhouyuan Huo, Heng Huang
Both our theoretical and empirical results show that our new methods can handle the "gradient mismatch" problem.
no code implementations • 4 Aug 2020 • Feihu Huang, Songcan Chen, Heng Huang
Our theoretical analysis shows that the online SPIDER-ADMM has the IFO complexity of $\mathcal{O}(\epsilon^{-\frac{3}{2}})$, which improves the existing best results by a factor of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$.
1 code implementation • ICML 2020 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
In particular, we present a non-adaptive version of the IS-MBPG method, i.e., IS-MBPG*, which also reaches the best known sample complexity of $O(\epsilon^{-3})$ without any large batches.
1 code implementation • 1 Jul 2020 • Tiange Xiang, Chaoyi Zhang, Dongnan Liu, Yang Song, Heng Huang, Weidong Cai
U-Net has become one of the state-of-the-art deep learning-based approaches for modern computer vision tasks such as semantic segmentation, super resolution, image denoising, and inpainting.
1 code implementation • 29 Jun 2020 • Runxue Bao, Bin Gu, Heng Huang
Moreover, we prove that the algorithms with our screening rule are guaranteed to have identical results with the original algorithms.
no code implementations • 17 Jun 2020 • Junyi Li, Heng Huang
Due to the rising privacy demand in data mining, Homomorphic Encryption (HE) is receiving more and more attention recently for its capability to do computations over the encrypted field.
1 code implementation • CVPR 2020 • Dongnan Liu, Donghao Zhang, Yang Song, Fan Zhang, Lauren O'Donnell, Heng Huang, Mei Chen, Weidong Cai
More specifically, we first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images.
no code implementations • 11 Apr 2020 • An Xu, Heng Huang
To tackle this important issue, we improve the communication-efficient distributed SGD from a novel aspect, that is, the trade-off between the variance and second moment of the gradient.
no code implementations • 25 Feb 2020 • An Xu, Zhouyuan Huo, Heng Huang
The communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
1 code implementation • 15 Feb 2020 • Dongnan Liu, Donghao Zhang, Yang Song, Heng Huang, Weidong Cai
Specifically, our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features, in order to facilitate the semantic contextual information learning in the instance branch.
no code implementations • 4 Feb 2020 • Zhouyuan Huo, Bin Gu, Heng Huang
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
no code implementations • 24 Dec 2019 • Wanli Shi, Bin Gu, Xiang Li, Heng Huang
Semi-supervised ordinal regression (S$^2$OR) problems are ubiquitous in real-world applications, where only a few ordered instances are labeled and massive numbers of instances remain unlabeled.
no code implementations • 24 Dec 2019 • Zhou Zhai, Bin Gu, Xiang Li, Heng Huang
To address this challenge, in this paper, we propose two safe sample screening rules for RSVM based on the framework of concave-convex procedure (CCCP).
no code implementations • 14 Dec 2019 • Heng Wang, Donghao Zhang, Yang Song, Heng Huang, Mei Chen, Weidong Cai
Our contribution consists of proposing a significant task worth investigating and a naive baseline for solving it.
1 code implementation • NeurIPS 2019 • Shuo Chen, Lei Luo, Jian Yang, Chen Gong, Jun Li, Heng Huang
To address this issue, we first reveal that the traditional linear distance metric is equivalent to the cumulative arc length between the data pair's nearest points on the learned straight measurer lines.
no code implementations • 9 Oct 2019 • Zhouyuan Huo, Heng Huang
Recently, reducing communication time between machines becomes the main focus of distributed data mining.
no code implementations • 25 Sep 2019 • Hongchang Gao, Gang Wu, Ryan Rossi, Viswanathan Swaminathan, Heng Huang
Factorization Machines (FMs) is an important supervised learning approach due to its unique ability to capture feature interactions when dealing with high-dimensional sparse data.
1 code implementation • NeurIPS 2019 • Qian Yang, Zhouyuan Huo, Wenlin Wang, Heng Huang, Lawrence Carin
Model parallelism is required if a model is too large to fit in a single computing device.
no code implementations • 6 Sep 2019 • Xiaoqian Wang, Heng Huang
In order to achieve this goal, we reformulate the data input by removing the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
no code implementations • CVPR 2020 • An Xu, Zhouyuan Huo, Heng Huang
Training the deep convolutional neural network for computer vision problems is slow and inefficient, especially when it is large and distributed across multiple devices.
no code implementations • 30 Jul 2019 • Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
Zeroth-order methods are powerful optimization tools for solving many machine learning problems because they only need function values (not gradients) in the optimization.
no code implementations • 29 Jul 2019 • Wanli Shi, Bin Gu, Xiang Li, Xiang Geng, Heng Huang
To address this problem, in this paper, we propose a novel scalable quadruply stochastic gradient algorithm (QSG-S2AUC) for nonlinear semi-supervised AUC optimization.
no code implementations • 26 Jul 2019 • Xiang Geng, Bin Gu, Xiang Li, Wanli Shi, Guansheng Zheng, Heng Huang
Specifically, to handle the two types of data instances involved in S$^3$VM, TSGS$^3$VM samples a labeled instance and an unlabeled instance, together with random features, in each iteration to compute a triply stochastic gradient.
no code implementations • 2 Jul 2019 • Feiping Nie, Zhanxuan Hu, Xiaoqian Wang, Rong Wang, Xuelong Li, Heng Huang
This work aims at solving the problems with intractable sparsity-inducing norms that are often encountered in various machine learning tasks, such as multi-task learning, subspace clustering, feature selection, robust principal component analysis, and so on.
no code implementations • 29 Jun 2019 • Feiping Nie, Hua Wang, Zheng Wang, Heng Huang
In this paper, we propose a novel robust linear discriminant analysis method based on L1,2-norm ratio minimization.
no code implementations • 29 May 2019 • Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang
In particular, our methods not only reach the best convergence rate $O(1/T)$ for the nonconvex optimization, but also are able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints.
2 code implementations • 7 May 2019 • Guanghan Ning, Heng Huang
To the best of our knowledge, this is the first paper to propose an online human pose tracking framework in a top-down fashion.
Ranked #3 on Pose Tracking on PoseTrack2017
1 code implementation • CVPR 2019 • Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, Heng Huang
In this paper, we propose a novel end-to-end trainable Video Question Answering (VideoQA) framework with three major components: 1) a new heterogeneous memory which can effectively learn global context information from appearance and motion features; 2) a redesigned question memory which helps understand the complex semantics of question and highlights queried subjects; and 3) a new multimodal fusion layer which performs multi-step reasoning by attending to relevant visual and textual hints with self-updated attention.
Ranked #29 on Visual Question Answering (VQA) on MSRVTT-QA
no code implementations • 16 Feb 2019 • Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng Huang
The proximal gradient method has been playing an important role in solving many machine learning tasks, especially nonsmooth problems.
no code implementations • NeurIPS 2018 • Jie Xu, Lei Luo, Cheng Deng, Heng Huang
Metric learning, aiming to learn a discriminative Mahalanobis distance matrix M that can effectively reflect the similarity between data samples, has been widely studied in various image recognition problems.
no code implementations • MICCAI 2018 • Donghao Zhang, Yang Song, Dongnan Liu, Haozhe Jia, Si-Qi Liu, Yong Xia, Heng Huang, Weidong Cai
The morphological clues of various cancer cells are essential for pathologists to determine the stages of cancers.
Ranked #1 on Nuclear Segmentation on Cell17
no code implementations • 18 Jul 2018 • Haozhe Jia, Yang Song, Donghao Zhang, Heng Huang, Dagan Feng, Michael Fulham, Yong Xia, Weidong Cai
In this paper, we propose a 3D Global Convolutional Adversarial Network (3D GCA-Net) to address efficient prostate MR volume segmentation.
no code implementations • NeurIPS 2018 • Zhouyuan Huo, Bin Gu, Heng Huang
Training a neural network using backpropagation algorithm requires passing error gradients sequentially through the network.
no code implementations • ICML 2018 • Bin Gu, Zhouyuan Huo, Cheng Deng, Heng Huang
Asynchronous parallel stochastic gradient optimization has been playing a pivotal role to solve large-scale machine learning problems in big data applications.
no code implementations • CVPR 2018 • Kamran Ghasedi Dizaji, Feng Zheng, Najmeh Sadoughi, Yanhua Yang, Cheng Deng, Heng Huang
HashGAN consists of three networks: a generator, a discriminator, and an encoder.
no code implementations • CVPR 2018 • Xin Miao, Xian-Tong Zhen, Xianglong Liu, Cheng Deng, Vassilis Athitsos, Heng Huang
In this paper, we propose the direct shape regression network (DSRN) for end-to-end face alignment by jointly handling the aforementioned challenges in a unified framework.
Ranked #15 on Face Alignment on AFLW-19
3 code implementations • ICML 2018 • Zhouyuan Huo, Bin Gu, Qian Yang, Heng Huang
The backward locking in backpropagation algorithm constrains us from updating network layers in parallel and fully leveraging the computing resources.
no code implementations • NeurIPS 2017 • Hong Chen, Xiaoqian Wang, Cheng Deng, Heng Huang
Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection.
no code implementations • NeurIPS 2017 • Xiaoqian Wang, Hong Chen, Weidong Cai, Dinggang Shen, Heng Huang
Linear regression models have been successfully used to function estimation and model selection in high-dimensional data analysis.
no code implementations • NeurIPS 2017 • Feiping Nie, Xiaoqian Wang, Cheng Deng, Heng Huang
In graph based co-clustering methods, a bipartite graph is constructed to depict the relation between features and samples.
no code implementations • 10 Nov 2017 • Zhouyuan Huo, Bin Gu, Ji Liu, Heng Huang
To the best of our knowledge, our method admits the fastest convergence rate for stochastic composition optimization: for the strongly convex composition problem, our algorithm is proved to admit linear convergence; for the general composition problem, our algorithm significantly improves the state-of-the-art convergence rate from $O(T^{-1/2})$ to $O((n_1+n_2)^{2/3}T^{-1})$.
no code implementations • 31 Oct 2017 • Zhe Guo, Xiang Li, Heng Huang, Ning Guo, Quanzheng Li
Image analysis using more than one modality (i.e., multi-modal) has been increasingly applied in the field of biomedical imaging.
no code implementations • 24 Oct 2017 • Milad Makkie, Heng Huang, Yu Zhao, Athanasios V. Vasilakos, Tianming Liu
In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks.
no code implementations • ICCV 2017 • Yang Song, Fan Zhang, Qing Li, Heng Huang, Lauren J. O'Donnell, Weidong Cai
Texture classification has been extensively studied in computer vision.
1 code implementation • ICCV 2017 • Kamran Ghasedi Dizaji, Amirhossein Herandi, Cheng Deng, Weidong Cai, Heng Huang
We define a clustering objective function using relative entropy (KL divergence) minimization, regularized by a prior for the frequency of cluster assignments.
Ranked #1 on Image Clustering on FRGC
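A minimal sketch of a KL-based clustering objective with a prior on cluster-assignment frequencies, in the spirit of the objective described above; the target-distribution construction and the uniform prior are assumptions borrowed from common deep-clustering practice, not necessarily the paper's exact formulation.

```python
# KL-divergence clustering objective with a frequency-prior regularizer (illustrative).
import numpy as np

def clustering_loss(q, eps=1e-8):
    """q: (N, K) soft cluster assignments from the model."""
    p = q ** 2 / q.sum(axis=0, keepdims=True)           # sharpened target distribution
    p = p / p.sum(axis=1, keepdims=True)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps))) / q.shape[0]
    freq = q.mean(axis=0)                                # empirical cluster frequencies
    prior = np.ones_like(freq) / freq.size               # uniform prior on frequencies
    reg = np.sum(freq * (np.log(freq + eps) - np.log(prior)))
    return kl + reg

q = np.random.default_rng(0).dirichlet(np.ones(20), size=512)
loss = clustering_loss(q)
```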
no code implementations • 18 Dec 2016 • Bin Gu, De Wang, Zhouyuan Huo, Heng Huang
The theoretical results show that our inexact proximal gradient algorithms can have the same convergence rates as the ones of exact proximal gradient algorithms in the non-convex setting.
no code implementations • 5 Dec 2016 • Bin Gu, Zhouyuan Huo, Heng Huang
The convergence rate of existing asynchronous doubly stochastic zeroth order algorithms is $O(\frac{1}{\sqrt{T}})$ (also for the sequential stochastic zeroth-order optimization algorithms).
no code implementations • NeurIPS 2016 • Hong Chen, Haifeng Xia, Heng Huang, Weidong Cai
Nystr\"{o}m method has been used successfully to improve the computational efficiency of kernel ridge regression (KRR).
no code implementations • 29 Oct 2016 • Bin Gu, Zhouyuan Huo, Heng Huang
In this paper, we focus on a composite objective function consisting of a smooth convex function $f$ and a block separable convex function, which widely exists in machine learning and computer vision.
no code implementations • 14 Oct 2016 • Shuai Zheng, Xiao Cai, Chris Ding, Feiping Nie, Heng Huang
Real life data often includes information from different channels.
no code implementations • 14 Oct 2016 • Shuai Zheng, Feiping Nie, Chris Ding, Heng Huang
In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix.
no code implementations • 22 Sep 2016 • Zhouyuan Huo, Bin Gu, Heng Huang
In this paper, we propose a faster method, decoupled asynchronous proximal stochastic variance reduced gradient descent method (DAP-SVRG).
no code implementations • 29 May 2016 • Zhouyuan Huo, Heng Huang
Our method does not need the dual formulation of the target problem in the optimization.
no code implementations • 12 Apr 2016 • Zhouyuan Huo, Heng Huang
We provide the first theoretical analysis on the convergence rate of the asynchronous stochastic variance reduced gradient (SVRG) descent algorithm on non-convex optimization.
no code implementations • 30 Mar 2016 • Peng Li, Heng Huang
We report an implementation of a clinical information extraction tool that leverages a deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports.
no code implementations • 30 Mar 2016 • Peng Li, Heng Huang
Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs.
no code implementations • 28 Mar 2016 • Feiping Nie, Heng Huang
In this paper, we propose to maximize the L21-norm based robust PCA objective, which is theoretically connected to the minimization of reconstruction error.
no code implementations • ICCV 2015 • Hongchang Gao, Feiping Nie, Xuelong Li, Heng Huang
In this paper, we propose a novel multi-view subspace clustering method.
no code implementations • 5 Sep 2015 • Wenhao Jiang, Cheng Deng, Wei Liu, Feiping Nie, Fu-Lai Chung, Heng Huang
Domain adaptation problems arise in a variety of applications, where a training dataset from the \textit{source} domain and a test dataset from the \textit{target} domain typically follow different distributions.
no code implementations • CVPR 2015 • Yang Song, Weidong Cai, Qing Li, Fan Zhang, David Dagan Feng, Heng Huang
Texture, as a fundamental characteristic of objects, has attracted much attention in computer vision research.
no code implementations • 23 Nov 2014 • Xiaojun Chang, Feiping Nie, Yi Yang, Heng Huang
Our algorithm is built upon two advancements of the state of the art: 1) label propagation, which propagates a node's labels to neighboring nodes according to their proximity; and 2) manifold learning, which has been widely used for its capacity to leverage the manifold structure of data points.
no code implementations • 23 Nov 2014 • Xiaojun Chang, Feiping Nie, Yi Yang, Heng Huang
In addition, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which in turn provides the output with good interpretability.
no code implementations • CVPR 2014 • Dijun Luo, Heng Huang
After that, we employ an embedded manifold denoising approach with the adaptive kernel to segment the motion of rigid and non-rigid objects.
no code implementations • CVPR 2013 • Hua Wang, Feiping Nie, Heng Huang, Chris Ding
We applied our SMML method to five broadly used object categorization and scene understanding image data sets for both single-label and multi-label image classification tasks.
no code implementations • NeurIPS 2012 • Dijun Luo, Heng Huang, Feiping Nie, Chris H. Ding
In many graph-based machine learning and data mining approaches, the quality of the graph is critical.
no code implementations • NeurIPS 2012 • Hua Wang, Feiping Nie, Heng Huang, Jingwen Yan, Sungeun Kim, Shannon Risacher, Andrew Saykin, Li Shen
Alzheimer disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions.
no code implementations • NeurIPS 2010 • Feiping Nie, Heng Huang, Xiao Cai, Chris H. Ding
The ℓ2,1-norm based loss function is robust to outliers in data points, and the ℓ2,1-norm regularization selects features across all data points with joint sparsity.
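For concreteness, a small sketch of the ℓ2,1-norm used above, applied both to the residual (loss) and to the weight matrix (regularizer) so that features are selected jointly across all data points; the toy data shapes and the regularization weight are illustrative assumptions.

```python
# l2,1-norm of a matrix: sum over rows of each row's l2 norm.
import numpy as np

def l21_norm(M):
    return np.sum(np.linalg.norm(M, axis=1))

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 20)), rng.standard_normal((100, 3))
W = rng.standard_normal((20, 3))
# robust loss on the residual + joint-sparsity regularizer on the rows of W (features)
objective = l21_norm(X @ W - Y) + 0.1 * l21_norm(W)
```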