Search Results for author: Haoyi Xiong

Found 49 papers, 16 papers with code

A Comparative Survey of Deep Active Learning

no code implementations • 25 Mar 2022 Xueying Zhan, Qingzhong Wang, Kuan-Hao Huang, Haoyi Xiong, Dejing Dou, Antoni B. Chan

Active Learning (AL) is a set of techniques for reducing labeling cost by sequentially selecting data samples from a large unlabeled data pool for labeling.

Active Learning

PP-HumanSeg: Connectivity-Aware Portrait Segmentation with a Large-Scale Teleconferencing Video Dataset

1 code implementation • 14 Dec 2021 Lutao Chu, Yi Liu, Zewu Wu, Shiyu Tang, Guowei Chen, Yuying Hao, Juncai Peng, Zhiliang Yu, Zeyu Chen, Baohua Lai, Haoyi Xiong

This work is the first to construct a large-scale video portrait dataset that contains 291 videos from 23 conference scenes with 14K fine-labeled frames and extensions to multi-camera teleconferencing.

Portrait Segmentation Semantic Segmentation

SenseMag: Enabling Low-Cost Traffic Monitoring using Non-invasive Magnetic Sensing

no code implementations • 24 Oct 2021 Kafeng Wang, Haoyi Xiong, Jie Zhang, Hongyang Chen, Dejing Dou, Cheng-Zhong Xu

Extensive experiments based on a real-world field deployment (on highways in Shenzhen, China) show that SenseMag significantly outperforms existing methods in both classification accuracy and the granularity of vehicle types (i.e., 7 types recognized by SenseMag versus 4 types by existing work).

AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow

no code implementations • 7 Oct 2021 Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou

Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.

Dimensionality Reduction Model Selection

Exploring the Common Principal Subspace of Deep Features in Neural Networks

no code implementations • 6 Oct 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.

Image Reconstruction Self-Supervised Learning

Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks

no code implementations • 29 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang

While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly interested in the gaps between ANNs and natural neural networks (NNNs).


AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators

1 code implementation • 21 Sep 2021 Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, Xiang Zhang

Most of the existing contrastive learning methods employ pre-defined view generation methods, e.g., node drop or edge perturbation, which usually cannot adapt to input data or preserve the original semantic structures well.

Contrastive Learning Graph Representation Learning +3

Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning

no code implementations • 8 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang

In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating their connection weights based on their observations, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.


Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study

no code implementations • 2 Sep 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou

Existing interpretation algorithms have found that, even when deep models make the same, correct predictions on the same image, they might rely on different sets of input features for classification.

Image Classification Semantic Segmentation +1

Semi-Supervised Active Learning with Temporal Output Discrepancy

1 code implementation • ICCV 2021 Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, Dejing Dou

To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset.

Active Learning Image Classification +1

Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity

1 code implementation • 21 Jul 2021 Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Weili Huang, Dejing Dou, Hui Xiong

To this end, we propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool).

Drug Discovery Graph Attention

Face.evoLVe: A High-Performance Face Recognition Library

1 code implementation • 19 Jul 2021 Qingzhong Wang, Pengfei Zhang, Haoyi Xiong, Jian Zhao

In this paper, we develop face.evoLVe -- a comprehensive library that collects and implements a wide range of popular deep learning-based methods for face recognition.

Face Alignment Face Recognition

From Personalized Medicine to Population Health: A Survey of mHealth Sensing Techniques

no code implementations • 2 Jul 2021 Zhiyuan Wang, Haoyi Xiong, Jie Zhang, Sijia Yang, Mehdi Boukhechba, Laura E. Barnes, Daqing Zhang, Dejing Dou

Mobile sensing apps have been widely used as a practical approach to collect behavioral and health-related information from individuals and to provide timely interventions that promote health and well-being, such as mental health and chronic care.

Robust Matrix Factorization with Grouping Effect

1 code implementation • 25 Jun 2021 Haiyan Jiang, Shuyu Li, Luwei Zhang, Haoyi Xiong, Dejing Dou

Compared with existing algorithms, the proposed GRMF can automatically learn the grouping structure and sparsity in MF without prior knowledge, by introducing a naturally adjustable non-convex regularization to achieve simultaneous sparsity and grouping effect.


Practical Assessment of Generalization Performance Robustness for Deep Networks via Contrastive Examples

no code implementations • 20 Jun 2021 Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou

Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, complementing the testing set.

Contrastive Learning

JIZHI: A Fast and Cost-Effective Model-As-A-Service System for Web-Scale Online Inference at Baidu

1 code implementation • 3 Jun 2021 Hao Liu, Qian Gao, Jiang Li, Xiaochao Liao, Hao Xiong, Guangxing Chen, Wenlin Wang, Guobao Yang, Zhiwei Zha, Daxiang Dong, Dejing Dou, Haoyi Xiong

In this work, we present JIZHI - a Model-as-a-Service system that handles hundreds of millions of online inference requests per second to huge deep models with trillions of sparse parameters, for over twenty real-time recommendation services at Baidu, Inc.

Recommendation Systems

Optimization Variance: Exploring Generalization Properties of DNNs

1 code implementation • 3 Jun 2021 Xiao Zhang, Dongrui Wu, Haoyi Xiong, Bo Dai

Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent.

Learning Theory

From Distributed Machine Learning to Federated Learning: A Survey

no code implementations • 29 Apr 2021 Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou

Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.

Federated Learning

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning

no code implementations • 25 Mar 2021 Xingjian Li, Haoyi Xiong, Chengzhong Xu, Dejing Dou

Performing mixup for transfer learning with pre-trained models, however, is not that simple: a high-capacity pre-trained model with a large fully-connected (FC) layer can easily overfit the target dataset even with samples-to-labels mixed up.

Transfer Learning

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond

1 code implementation • 19 Mar 2021 Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou

Then, to understand the results of interpretation, we also survey the performance metrics for evaluating interpretation algorithms.

Adversarial Robustness

Democratizing Evaluation of Deep Model Interpretability through Consensus

no code implementations • 1 Jan 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou

Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the “ground truth” of interpretations through voting.
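The aggregate-then-score step can be sketched as follows; the normalization, the mean-based voting, and the correlation-based scoring here are simplifying assumptions rather than the paper's exact procedure:

```python
import numpy as np

def consensus(interpretations):
    """Aggregate per-model saliency maps into a voted 'ground truth',
    then score each model by similarity to that consensus."""
    # Normalize each map so models with different scales vote equally.
    stack = np.stack([m / (np.abs(m).sum() + 1e-12) for m in interpretations])
    truth = stack.mean(axis=0)  # committee vote: element-wise average
    scores = [float(np.corrcoef(m.ravel(), truth.ravel())[0, 1]) for m in stack]
    return truth, scores
```

A model whose interpretation correlates highly with the consensus would be ranked as more aligned with the committee's "ground truth".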

Feature Importance

Implicit Regularization Effects of Unbiased Random Label Noises with SGD

no code implementations • 1 Jan 2021 Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu

Random label noises (or observational noises) widely exist in practical machine learning settings.

Empirical Studies on the Convergence of Feature Spaces in Deep Learning

no code implementations • 1 Jan 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

While deep learning is effective to learn features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well-studied or compared.

Image Reconstruction Self-Supervised Learning

Can We Use Gradient Norm as a Measure of Generalization Error for Model Selection in Practice?

no code implementations • 1 Jan 2021 Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu

The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.

Model Selection

C-Watcher: A Framework for Early Detection of High-Risk Neighborhoods Ahead of COVID-19 Outbreak

no code implementations • 22 Dec 2020 Congxi Xiao, Jingbo Zhou, Jizhou Huang, An Zhuo, Ji Liu, Haoyi Xiong, Dejing Dou

Furthermore, to transfer firsthand knowledge (witnessed in epicenters) to a target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features for precise early detection of high-risk neighborhoods in the target city, even before any confirmed cases are known.

Distance-aware Molecule Graph Attention Network for Drug-Target Binding Affinity Prediction

1 code implementation • 17 Dec 2020 Jingbo Zhou, Shuangli Li, Liang Huang, Haoyi Xiong, Fan Wang, Tong Xu, Hui Xiong, Dejing Dou

The hierarchical attentive aggregation can capture spatial dependencies among atoms, as well as fuse the position-enhanced information with the capability of discriminating multiple spatial relations among atoms.

Drug Discovery Graph Attention +1

Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement

no code implementations • 16 Oct 2020 Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou

Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms given a limited quantity of training samples.

Disentanglement Transfer Learning

XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup

no code implementations • 20 Jul 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

While existing multitask learning algorithms need to run backpropagation over both the source and target datasets and usually incur a higher gradient complexity, XMixup transfers knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects auxiliary samples from the source dataset and augments training samples via the simple mixup strategy.
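The select-and-mix step can be sketched as below; the per-class pool structure and the fixed mixing ratio are simplifying assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def xmixup_batch(x_target, y_target, aux_pool, lam=0.7, rng=None):
    """Mix each target sample with an auxiliary source sample drawn from
    the pool assigned to its class (a toy sketch of cross-domain mixup)."""
    if rng is None:
        rng = np.random.default_rng()
    mixed = np.empty_like(x_target, dtype=float)
    for i, (x, y) in enumerate(zip(x_target, y_target)):
        candidates = aux_pool[int(y)]  # auxiliary source samples for this class
        x_src = candidates[rng.integers(len(candidates))]
        mixed[i] = lam * x + (1.0 - lam) * x_src  # simple input-space mixup
    return mixed
```

Only the target batch is needed at training time; the source dataset contributes pre-selected auxiliary samples, which is where the efficiency gain comes from.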

Transfer Learning

Generating Person Images with Appearance-aware Pose Stylizer

1 code implementation • 17 Jul 2020 Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou

Generation of high-quality person images is challenging, due to the sophisticated entanglements among image factors, e.g., appearance, pose, foreground, background, local details, global structures, etc.

Image Generation

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr

1 code implementation • ICML 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning, while the effects of randomization readily diminish over the course of the overall learning procedure.
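The core trick, periodically re-initializing the fully-connected head while training continues, can be illustrated on a toy linear head. The logistic head and the fixed schedule below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rifle_finetune(features, labels, epochs=30, reinit_every=10, lr=0.1, seed=0):
    """Train a logistic-regression head by gradient descent, re-initializing
    its weights every `reinit_every` epochs (a toy sketch of RIFLE)."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    w = rng.normal(scale=0.01, size=d)
    for epoch in range(epochs):
        if epoch > 0 and epoch % reinit_every == 0:
            w = rng.normal(scale=0.01, size=d)  # RIFLE: re-initialize the FC layer
        probs = 1.0 / (1.0 + np.exp(-(features @ w)))
        w -= lr * features.T @ (probs - labels) / n  # logistic-loss gradient step
    return w
```

In the full method the pre-trained backbone keeps training across these re-initializations, which is what forces the deeper layers to receive meaningful updates instead of letting the head do all the work.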

Transfer Learning

Quantifying the Economic Impact of COVID-19 in Mainland China Using Human Mobility Data

no code implementations • 6 May 2020 Jizhou Huang, Haifeng Wang, Haoyi Xiong, Miao Fan, An Zhuo, Ying Li, Dejing Dou

While these strategies have effectively dealt with the critical situations of outbreaks, the combination of the pandemic and mobility controls has slowed China's economic growth, resulting in the first quarterly decline of Gross Domestic Product (GDP) since GDP records began in 1992.

Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs

1 code implementation • 29 Apr 2020 Xiao Zhang, Haoyi Xiong, Dongrui Wu

Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance, challenging the bias-variance trade-off in classical learning theory.

Learning Theory

COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization

no code implementations • 26 Apr 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Dejing Dou, Chengzhong Xu

Softening labels of training datasets with respect to data representations has been frequently used to improve the training of deep neural networks (DNNs).

General Classification

Parameter-Free Style Projection for Arbitrary Style Transfer

1 code implementation • 17 Mar 2020 Siyu Huang, Haoyi Xiong, Tianyang Wang, Bihan Wen, Qingzhong Wang, Zeyu Chen, Jun Huan, Dejing Dou

This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer, which includes a regularization term for matching the semantics between input contents and stylized outputs.

Style Transfer

Curriculum Audiovisual Learning

no code implementations • 26 Jan 2020 Di Hu, Zheng Wang, Haoyi Xiong, Dong Wang, Feiping Nie, Dejing Dou

Associating a sound with its producer in a complex audiovisual scene is a challenging task, especially when annotated training data are lacking.

Ultrafast Photorealistic Style Transfer via Neural Architecture Search

no code implementations • 5 Dec 2019 Jie An, Haoyi Xiong, Jun Huan, Jiebo Luo

Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration.

Network Pruning Neural Architecture Search +1

SecureGBM: Secure Multi-Party Gradient Boosting

no code implementations • 27 Nov 2019 Zhi Feng, Haoyi Xiong, Chuanyuan Song, Sijia Yang, Baoxin Zhao, Licheng Wang, Zeyu Chen, Shengwen Yang, Li-Ping Liu, Jun Huan

Our experiments using real-world data showed that SecureGBM can well secure the communication and computation of LightGBM training and inference procedures for both parties while losing less than 3% AUC, using the same number of gradient boosting iterations, on a wide range of benchmark datasets.

Towards Making Deep Transfer Learning Never Hurt

no code implementations • 18 Nov 2019 Ruosi Wan, Haoyi Xiong, Xingjian Li, Zhanxing Zhu, Jun Huan

The empirical results show that the proposed descent direction estimation strategy DTNH can always improve the performance of deep transfer learning tasks based on all above regularizers, even when transferring pre-trained weights from inappropriate networks.

Knowledge Distillation Transfer Learning

Fast Universal Style Transfer for Artistic and Photorealistic Rendering

no code implementations • 6 Jul 2019 Jie An, Haoyi Xiong, Jiebo Luo, Jun Huan, Jinwen Ma

Given a pair of images as the source of content and the reference of style, existing solutions usually first train an auto-encoder (AE) to reconstruct the image using deep features and then embed pre-defined style transfer modules into the AE reconstruction procedure to transfer the style of the reconstructed image by modifying the deep features.

Style Transfer

On the Noisy Gradient Descent that Generalizes as SGD

1 code implementation • ICML 2020 Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu

The gradient noise of SGD is considered to play a central role in the observed strong generalization abilities of deep learning.

StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks

no code implementations • 6 Jun 2019 Jie An, Haoyi Xiong, Jinwen Ma, Jiebo Luo, Jun Huan

Finally, compared to existing universal style transfer networks for photorealistic rendering, such as PhotoWCT, which stacks multiple well-trained auto-encoders and WCT transforms in a non-end-to-end manner, the architectures designed by StyleNAS produce better style-transferred images with details preserved, use a tiny number of operators/parameters, and enjoy around a 500x inference-time speed-up.

Image Classification Neural Architecture Search +3

SHE2: Stochastic Hamiltonian Exploration and Exploitation for Derivative-Free Optimization

no code implementations • ICLR 2019 Haoyi Xiong, Wenqing Hu, Zhanxing Zhu, Xinjian Li, Yunchao Zhang, Jun Huan

Derivative-free optimization (DFO) using trust region methods is frequently used in machine learning applications, such as (hyper-)parameter optimization where the derivatives of objective functions are unknown.

Text-to-Image Generation

Quasi-potential as an implicit regularizer for the loss function in the stochastic gradient descent

no code implementations • 18 Jan 2019 Wenqing Hu, Zhanxing Zhu, Haoyi Xiong, Jun Huan

We show in this case that the quasi-potential function is related to the noise covariance structure of SGD via a partial differential equation of Hamilton-Jacobi type.

Variational Inference

Neural Control Variates for Variance Reduction

no code implementations • 1 Jun 2018 Ruosi Wan, Mingjun Zhong, Haoyi Xiong, Zhanxing Zhu

In statistics and machine learning, approximation of an intractable integral is often achieved by using an unbiased Monte Carlo estimator, but the variance of the estimator is generally high in many applications.

CSWA: Aggregation-Free Spatial-Temporal Community Sensing

no code implementations • 15 Nov 2017 Jiang Bian, Haoyi Xiong, Yanjie Fu, Sajal K. Das

In this paper, we present a novel community sensing paradigm -- Community Sensing Without Aggregation.

Compressive Sensing Distributed Optimization

FWDA: a Fast Wishart Discriminant Analysis with its Application to Electronic Health Records Data Classification

no code implementations • 25 Apr 2017 Haoyi Xiong, Wei Cheng, Wenqing Hu, Jiang Bian, Zhishan Guo

Classical LDA for EHR data classification, however, suffers from two handicaps: the ill-posed estimation of LDA parameters (e.g., covariance matrix), and the "linear inseparability" of EHR data.

Classification General Classification

Provably Good Early Detection of Diseases using Non-Sparse Covariance-Regularized Linear Discriminant Analysis

no code implementations • 18 Oct 2016 Haoyi Xiong, Yanjie Fu, Wenqing Hu, Guanling Chen, Laura E. Barnes

To improve the performance of Linear Discriminant Analysis (LDA) for early detection of diseases using Electronic Health Records (EHR) data, we propose a novel framework for EHR-based Early Detection of Diseases on top of Covariance-Regularized LDA models.

General Classification

CT-Mapper: Mapping Sparse Multimodal Cellular Trajectories using a Multilayer Transportation Network

no code implementations • 22 Apr 2016 Fereshteh Asgari, Alexis Sultan, Haoyi Xiong, Vincent Gauthier, Mounim El-Yacoubi

One of the main strengths of CT-Mapper is its capability to map noisy, sparse, multimodal cellular trajectories over a multilayer transportation network whose layers have different physical properties, rather than mapping only trajectories associated with a single layer.
