Search Results for author: Haoyi Xiong

Found 74 papers, 27 papers with code

Explanations of Classifiers Enhance Medical Image Segmentation via End-to-end Pre-training

no code implementations16 Jan 2024 Jiamin Chen, Xuhong LI, Yanwu Xu, Mengnan Du, Haoyi Xiong

Based on a large-scale medical image classification dataset, our work collects explanations from well-trained classifiers to generate pseudo labels for segmentation tasks (a toy sketch of this idea follows below).

Image Classification Image Segmentation +4
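
The idea above (turning classifier explanations into segmentation supervision) can be illustrated with a minimal sketch, assuming a CAM-style explanation: channel feature maps are weighted by the classifier's weights, normalized, and thresholded into a binary pseudo mask. The shapes, threshold, and random `features`/`fc_weights` arrays are hypothetical placeholders, not the paper's actual pipeline.

```python
import numpy as np

# Illustrative assumption: a CNN backbone produced 8x8 feature maps with 64
# channels, and a linear classifier maps pooled features to 10 classes.
rng = np.random.default_rng(0)
features = rng.random((64, 8, 8))        # (channels, H, W) for one image
fc_weights = rng.normal(size=(10, 64))   # (classes, channels)

def cam_pseudo_mask(features, fc_weights, target_class, threshold=0.5):
    """Threshold a CAM-style explanation into a binary segmentation pseudo label."""
    # Weight each channel by its contribution to the target class logit.
    cam = np.tensordot(fc_weights[target_class], features, axes=([0], [0]))   # (H, W)
    cam = np.maximum(cam, 0.0)                                # keep positive evidence
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return (cam >= threshold).astype(np.uint8)

mask = cam_pseudo_mask(features, fc_weights, target_class=3)
print(mask.shape, int(mask.sum()))       # (8, 8) and the number of foreground pixels
```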

Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

no code implementations9 Jan 2024 Haoyi Xiong, Xuhong LI, Xiaofei Zhang, Jiamin Chen, Xinhao Sun, Yuchen Li, Zeyi Sun, Mengnan Du

Given the complexity and lack of transparency in deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or explain their behaviors in accessible terms.

Data Valuation Decision Making +2

TiC: Exploring Vision Transformer in Convolution

1 code implementation6 Oct 2023 Song Zhang, Qingzhong Wang, Jiang Bian, Haoyi Xiong

While models derived from Vision Transformers (ViTs) have been phenomenally surging, pre-trained models cannot seamlessly adapt to arbitrary-resolution images without altering the architecture and configuration, such as sampling the positional encoding, which limits their flexibility for various vision tasks.

Image Classification

CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell Segmentation

no code implementations6 Oct 2023 Weibin Liao, Xuhong LI, Qingzhong Wang, Yanwu Xu, Zhaozheng Yin, Haoyi Xiong

While pre-training on object detection tasks, such as Common Objects in Context (COCO) [1], could significantly boost the performance of cell segmentation, it still consumes massive amounts of finely annotated cell images [2], with bounding boxes, masks, and cell types for every cell in every image, to fine-tune the pre-trained model.

Cell Segmentation Contrastive Learning +6

MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts

no code implementations3 Oct 2023 Weibin Liao, Haoyi Xiong, Qingzhong Wang, Yan Mo, Xuhong LI, Yi Liu, Zeyu Chen, Siyu Huang, Dejing Dou

In this work, we study a novel self-supervised pre-training pipeline, namely Multi-task Self-supervised Continual Learning (MUSCLE), for multiple medical imaging tasks, such as classification and segmentation, using X-ray images collected from multiple body parts, including heads, lungs, and bones.

Continual Learning Representation Learning +1

Natural Language based Context Modeling and Reasoning for Ubiquitous Computing with Large Language Models: A Tutorial

no code implementations24 Sep 2023 Haoyi Xiong, Jiang Bian, Sijia Yang, Xiaofei Zhang, Linghe Kong, Daqing Zhang

Recently, with the rise of LLMs and their improved natural language understanding and reasoning capabilities, it has become feasible to model contexts using natural language and perform context reasoning by interacting with LLMs such as ChatGPT and GPT-4.

Natural Language Understanding Scheduling

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

1 code implementation16 Apr 2023 Wenke Xia, Xingjian Li, Andong Deng, Haoyi Xiong, Dejing Dou, Di Hu

However, such semantic consistency from the synchronization is hard to guarantee in unconstrained videos, due to the irrelevant modality noise and differentiated semantic correlation.

Action Recognition Audio Tagging +3

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

no code implementations1 Apr 2023 Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou

While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under the mini-batch sampling setting of stochastic gradient descent (SGD), assuming the label noise is unbiased.
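
A minimal sketch of the setting described above, under illustrative assumptions: mini-batch SGD on a least-squares problem where a fresh zero-mean (unbiased) noise term is added to the labels at every step. It only shows the mechanics of unbiased label noise under SGD, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                          # clean labels

w = np.zeros(d)
lr, batch, noise_std = 0.05, 32, 0.5
for step in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    # Unbiased label noise: zero-mean perturbation redrawn at every SGD step.
    y_noisy = y[idx] + rng.normal(scale=noise_std, size=batch)
    grad = X[idx].T @ (X[idx] @ w - y_noisy) / batch
    w -= lr * grad

print("parameter error:", np.linalg.norm(w - w_true))   # small despite noisy labels
```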

Video4MRI: An Empirical Study on Brain Magnetic Resonance Image Analytics with CNN-based Video Classification Frameworks

no code implementations24 Feb 2023 Yuxuan Zhang, Qingzhong Wang, Jiang Bian, Yi Liu, Yanwu Xu, Dejing Dou, Haoyi Xiong

Due to the high similarity between MRI data and videos, we conduct extensive empirical studies on video recognition techniques for MRI classification to answer the questions: (1) can we directly use video recognition models for MRI classification, (2) which model is more appropriate for MRI, (3) are the common tricks like data augmentation in video recognition still useful for MRI classification?

Classification Data Augmentation +3

Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution

1 code implementation5 Jan 2023 Yan Li, Xinjiang Lu, Haoyi Xiong, Jian Tang, Jiantao Su, Bo Jin, Dejing Dou

Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.

Time Series Time Series Forecasting +1

Temporal Output Discrepancy for Loss Estimation-based Active Learning

no code implementations20 Dec 2022 Siyu Huang, Tianyang Wang, Haoyi Xiong, Bihan Wen, Jun Huan, Dejing Dou

Inspired by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incorporate high loss (a toy sketch of this querying rule follows below).

Active Learning Image Classification +1
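
A toy sketch of the querying rule above: rank the unlabeled pool by an estimated loss and send the highest-scoring samples to the oracle. Prediction entropy is used here as a stand-in for the loss estimate; the paper's own estimator (temporal output discrepancy) is not reproduced.

```python
import numpy as np

def query_by_estimated_loss(probs, budget):
    """Select the `budget` unlabeled samples whose estimated loss is highest.

    `probs` is an (N, C) array of softmax outputs on the unlabeled pool.
    Prediction entropy serves as an illustrative stand-in for a loss estimate.
    """
    est_loss = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(est_loss)[::-1][:budget]    # indices to send to the oracle

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
to_label = query_by_estimated_loss(probs, budget=50)
print(to_label[:10])
```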

Learning from Training Dynamics: Identifying Mislabeled Data Beyond Manually Designed Features

1 code implementation19 Dec 2022 Qingrui Jia, Xuhong LI, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, Dejing Dou

While mislabeled or ambiguously-labeled samples in the training set could negatively affect the performance of deep models, diagnosing the dataset and identifying mislabeled samples helps to improve the generalization power.
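
One way to read this idea in code: record a per-sample statistic (here, the loss) at every epoch and flag samples whose loss stays persistently high across training. The loss matrix below is simulated and the percentile threshold is an arbitrary illustrative choice, not the learned detector the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_epochs = 1000, 20

# Simulated training dynamics: clean samples' losses decay, while a small group
# of (mislabeled) samples keeps a high loss throughout training.
loss_history = np.exp(-0.3 * np.arange(n_epochs)) + 0.05 * rng.random((n_samples, n_epochs))
mislabeled = rng.choice(n_samples, size=50, replace=False)
loss_history[mislabeled] += 1.5

def flag_suspects(loss_history, percentile=95):
    """Flag samples whose mean loss over training sits above a percentile."""
    mean_loss = loss_history.mean(axis=1)
    return np.where(mean_loss > np.percentile(mean_loss, percentile))[0]

suspects = flag_suspects(loss_history)
print("recovered:", np.intersect1d(suspects, mislabeled).size, "of", mislabeled.size)
```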

MedSegDiff: Medical Image Segmentation with Diffusion Probabilistic Model

2 code implementations1 Nov 2022 Junde Wu, Rao Fu, Huihui Fang, Yu Zhang, Yehui Yang, Haoyi Xiong, Huiying Liu, Yanwu Xu

Inspired by the success of DPM, we propose the first DPM-based model for general medical image segmentation tasks, which we name MedSegDiff.

Anomaly Detection Brain Tumor Segmentation +8

AA-Forecast: Anomaly-Aware Forecast for Extreme Events

1 code implementation21 Aug 2022 Ashkan Farhangi, Jiang Bian, Arthur Huang, Haoyi Xiong, Jun Wang, Zhishan Guo

Moreover, the framework employs a dynamic uncertainty optimization algorithm that reduces the uncertainty of forecasts in an online manner.

Anomaly Forecasting Management +3

P$^2$A: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos

no code implementations26 Jul 2022 Jiang Bian, Qingzhong Wang, Haoyi Xiong, Jun Huang, Chen Liu, Xuhong LI, Jun Cheng, Jun Zhao, Feixiang Lu, Dejing Dou

While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.

Action Detection Action Localization +2

Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios

no code implementations4 Jul 2022 Xueying Zhan, Zeyu Dai, Qingzhong Wang, Qing Li, Haoyi Xiong, Dejing Dou, Antoni B. Chan

In this paper, we propose a sampling scheme, Monte-Carlo Pareto Optimization for Active Learning (POAL), which selects optimal subsets of unlabeled samples with a fixed batch size from the unlabeled data pool (a toy sketch of the Pareto selection step follows below).

Active Learning
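
A small sketch of Pareto-based selection in the spirit of the entry above: given two acquisition scores per unlabeled sample (an uncertainty score and a diversity score, both random placeholders here), keep the non-dominated candidates and draw the fixed-size batch from them. POAL's Monte-Carlo ensemble machinery is not reproduced.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows (higher is better on every column)."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # Points weakly worse everywhere and strictly worse somewhere are dominated.
            dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
            keep &= ~dominated
    return np.where(keep)[0]

rng = np.random.default_rng(0)
uncertainty = rng.random(500)      # placeholder acquisition score 1
diversity = rng.random(500)        # placeholder acquisition score 2
front = pareto_front(np.column_stack([uncertainty, diversity]))
batch = rng.choice(front, size=min(20, front.size), replace=False)   # fixed batch size
print("front size:", front.size, "queried:", batch.size)
```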

Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models

2 code implementations4 Jul 2022 Xuhong LI, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou

Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner.

Classification Image Classification +3

A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications

1 code implementation2 Jun 2022 Fei Wu, Qingzhong Wang, Jian Bian, Haoyi Xiong, Ning Ding, Feixiang Lu, Jun Cheng, Dejing Dou

Finally, we discuss the challenges and unsolved problems in this area, and, to facilitate sports analytics, we develop a toolbox using PaddlePaddle that supports football, basketball, table tennis, and figure skating action recognition.

Action Recognition Sports Analytics +1

SE-MoE: A Scalable and Efficient Mixture-of-Experts Distributed Training and Inference System

1 code implementation20 May 2022 Liang Shen, Zhihua Wu, Weibao Gong, Hongxiang Hao, Yangfan Bai, HuaChao Wu, Xinxuan Wu, Jiang Bian, Haoyi Xiong, dianhai yu, Yanjun Ma

With the increasing diversity of ML infrastructures nowadays, distributed training over heterogeneous computing systems is desired to facilitate the production of big models.

Distributed Computing

A Simple yet Effective Framework for Active Learning to Rank

no code implementations20 May 2022 Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin

To handle the diverse query requests from users at web scale, Baidu has made tremendous efforts in understanding users' queries, retrieving relevant content from a pool of trillions of webpages, and ranking the most relevant webpages at the top of the results.

Active Learning Learning-To-Rank

A Comparative Survey of Deep Active Learning

1 code implementation25 Mar 2022 Xueying Zhan, Qingzhong Wang, Kuan-Hao Huang, Haoyi Xiong, Dejing Dou, Antoni B. Chan

In this work, we construct a DAL toolkit, DeepAL+, by re-implementing 19 highly-cited DAL methods.

Active Learning

Towards Inadequately Pre-trained Models in Transfer Learning

no code implementations ICCV 2023 Andong Deng, Xingjian Li, Di Hu, Tianyang Wang, Haoyi Xiong, Chengzhong Xu

Based on the contradictory phenomenon between feature extraction (FE) and fine-tuning (FT), namely that a better feature extractor does not necessarily fine-tune to better accuracy, we conduct comprehensive analyses of the features before the softmax layer to provide insightful explanations.

Transfer Learning

PP-HumanSeg: Connectivity-Aware Portrait Segmentation with a Large-Scale Teleconferencing Video Dataset

1 code implementation14 Dec 2021 Lutao Chu, Yi Liu, Zewu Wu, Shiyu Tang, Guowei Chen, Yuying Hao, Juncai Peng, Zhiliang Yu, Zeyu Chen, Baohua Lai, Haoyi Xiong

This work is the first to construct a large-scale video portrait dataset that contains 291 videos from 23 conference scenes with 14K fine-labeled frames and extensions to multi-camera teleconferencing.

Portrait Segmentation Segmentation +1

SenseMag: Enabling Low-Cost Traffic Monitoring using Non-invasive Magnetic Sensing

no code implementations24 Oct 2021 Kafeng Wang, Haoyi Xiong, Jie Zhang, Hongyang Chen, Dejing Dou, Cheng-Zhong Xu

Extensive experiments based on real-world field deployments (on highways in Shenzhen, China) show that SenseMag significantly outperforms the existing methods in both classification accuracy and the granularity of vehicle types (i.e., 7 types by SenseMag versus 4 types by the existing work in comparison).

Management

AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow

no code implementations7 Oct 2021 Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou

Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.

Dimensionality Reduction Model Selection

Exploring the Common Principal Subspace of Deep Features in Neural Networks

no code implementations6 Oct 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

Specifically, we design a new metric, the $\mathcal{P}$-vector, to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors (a generic sketch of principal-subspace angles follows below).

Image Reconstruction Self-Supervised Learning
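
The measurement described above can be illustrated with standard linear algebra: take the top-$k$ principal directions of two feature matrices and compute the principal angles between the subspaces from the singular values of the product of their orthonormal bases. The feature matrices are random stand-ins, and this generic recipe is not the paper's specific $\mathcal{P}$-vector construction.

```python
import numpy as np

def principal_subspace(features, k):
    """Top-k principal directions (columns) of a (samples, dims) feature matrix."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                                   # (dims, k), orthonormal columns

def principal_angles(U, V):
    """Principal angles (radians) between two subspaces given orthonormal bases."""
    sigma = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(2000, 64))    # e.g. deep features from model A
feats_b = rng.normal(size=(2000, 64))    # e.g. deep features from model B
angles = principal_angles(principal_subspace(feats_a, 5), principal_subspace(feats_b, 5))
print(np.degrees(angles))
```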

Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks

no code implementations29 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang

While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly obsessed by the gaps between ANNs and natural neural networks (NNNs).

Meta-Learning

AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators

1 code implementation21 Sep 2021 Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, Xiang Zhang

Most of the existing contrastive learning methods employ pre-defined view generation methods, e.g., node drop or edge perturbation, which usually cannot adapt to input data or preserve the original semantic structures well.

Contrastive Learning Graph Representation Learning +3

Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning

2 code implementations8 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang

In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating the neural connections based on the inputs, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.

Memorization Meta-Learning

Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study

no code implementations2 Sep 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou

Existing interpretation algorithms have found that, even when deep models make the same, correct predictions on the same image, they might rely on different sets of input features for classification.

Attribute Image Classification +2

Semi-Supervised Active Learning with Temporal Output Discrepancy

1 code implementation ICCV 2021 Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, Dejing Dou

To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset.

Active Learning Image Classification +1

Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity

1 code implementation21 Jul 2021 Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Weili Huang, Dejing Dou, Hui Xiong

To this end, we propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool).

Drug Discovery Graph Attention +1

Face.evoLVe: A High-Performance Face Recognition Library

1 code implementation19 Jul 2021 Qingzhong Wang, Pengfei Zhang, Haoyi Xiong, Jian Zhao

In this paper, we develop face.evoLVe -- a comprehensive library that collects and implements a wide range of popular deep learning-based methods for face recognition.

Face Alignment Face Recognition +1

From Personalized Medicine to Population Health: A Survey of mHealth Sensing Techniques

no code implementations2 Jul 2021 Zhiyuan Wang, Haoyi Xiong, Jie Zhang, Sijia Yang, Mehdi Boukhechba, Laura E. Barnes, Daqing Zhang, Dejing Dou

Mobile sensing apps have been widely used as a practical approach to collect behavioral and health-related information from individuals and provide timely interventions to promote health and well-being, such as mental health and chronic care.

Robust Matrix Factorization with Grouping Effect

no code implementations25 Jun 2021 Haiyan Jiang, Shuyu Li, Luwei Zhang, Haoyi Xiong, Dejing Dou

Compared with existing algorithms, the proposed GRMF can automatically learn the grouping structure and sparsity in MF without prior knowledge, by introducing a naturally adjustable non-convex regularization to achieve simultaneous sparsity and grouping effect.

Denoising

Practical Assessment of Generalization Performance Robustness for Deep Networks via Contrastive Examples

no code implementations20 Jun 2021 Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou

Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, complementing the testing set (a rough sketch follows below).

Contrastive Learning
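
A rough sketch of the evaluation recipe above, under illustrative assumptions: apply randomized transformations to the training inputs to form contrastive examples and report a trained model's classification error on them as a generalization proxy (the Fisher-ratio component is omitted). The nearest-centroid "model" below is a trivial stand-in for a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
class_means = 3.0 * rng.normal(size=(3, 20))
y = np.repeat(np.arange(3), 200)
X = class_means[y] + rng.normal(size=(600, 20))           # toy training set

centroids = np.stack([X[y == c].mean(axis=0) for c in range(3)])   # stand-in "model"

def predict(Z):
    return np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)

def contrastive_error(X, y, n_rounds=10, noise_std=0.5):
    """Mean classification error on randomly transformed copies of the training set."""
    errs = []
    for _ in range(n_rounds):
        X_aug = X + rng.normal(scale=noise_std, size=X.shape)   # randomized transform
        errs.append((predict(X_aug) != y).mean())
    return float(np.mean(errs))

print("train error:", (predict(X) != y).mean(), "contrastive error:", contrastive_error(X, y))
```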

JIZHI: A Fast and Cost-Effective Model-As-A-Service System for Web-Scale Online Inference at Baidu

1 code implementation3 Jun 2021 Hao liu, Qian Gao, Jiang Li, Xiaochao Liao, Hao Xiong, Guangxing Chen, Wenlin Wang, Guobao Yang, Zhiwei Zha, daxiang dong, Dejing Dou, Haoyi Xiong

In this work, we present JIZHI - a Model-as-a-Service system - that handles hundreds of millions of online inference requests per second to huge deep models with trillions of sparse parameters, for over twenty real-time recommendation services at Baidu, Inc.

Recommendation Systems

Optimization Variance: Exploring Generalization Properties of DNNs

1 code implementation3 Jun 2021 Xiao Zhang, Dongrui Wu, Haoyi Xiong, Bo Dai

Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent.

Learning Theory

From Distributed Machine Learning to Federated Learning: A Survey

no code implementations29 Apr 2021 Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou

Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.

BIG-bench Machine Learning Federated Learning

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning

no code implementations25 Mar 2021 Xingjian Li, Haoyi Xiong, Chengzhong Xu, Dejing Dou

Performing mixup for transfer learning with pre-trained models, however, is not that simple: a high-capacity pre-trained model with a large fully-connected (FC) layer could easily overfit the target dataset even with samples-to-labels mixed up.

Transfer Learning

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond

1 code implementation19 Mar 2021 Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou

Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.

Adversarial Robustness

Democratizing Evaluation of Deep Model Interpretability through Consensus

no code implementations1 Jan 2021 Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou

Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the “ground truth” of interpretations through voting (a condensed sketch of this voting step follows below).

Feature Importance
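
A condensed sketch of the voting step described above: given per-model feature-importance vectors for the same input (random placeholders rather than real LIME outputs), average them into a consensus and score each committee member by its similarity to that consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_features = 6, 100
# Placeholder: one feature-importance vector per committee model for one input.
importances = rng.random((n_models, n_features))

consensus = importances.mean(axis=0)              # aggregate the committee ("voting")

def similarity_to_consensus(importances, consensus):
    """Cosine similarity of each model's explanation to the consensus explanation."""
    num = importances @ consensus
    den = np.linalg.norm(importances, axis=1) * np.linalg.norm(consensus)
    return num / den

scores = similarity_to_consensus(importances, consensus)
print("model ranking (closest to consensus first):", np.argsort(scores)[::-1])
```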

Can We Use Gradient Norm as a Measure of Generalization Error for Model Selection in Practice?

no code implementations1 Jan 2021 Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu

The recent theoretical investigation (Li et al., 2020) on the upper bound of the generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice (a minimal sketch of the measured quantity follows below).

Model Selection
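
A minimal sketch of the quantity in question: the norm of the loss gradient with respect to the parameters, averaged over mini-batches, for a small PyTorch model. The model, data, and averaging choice are illustrative assumptions rather than the protocol of Li et al. (2020).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss()
X, y = torch.randn(256, 20), torch.randint(0, 3, (256,))

def mean_gradient_norm(model, X, y, batch_size=32):
    """Average L2 norm of the parameter gradient over mini-batches."""
    norms = []
    for i in range(0, len(X), batch_size):
        model.zero_grad()
        loss = criterion(model(X[i:i + batch_size]), y[i:i + batch_size])
        loss.backward()
        grads = [p.grad.flatten() for p in model.parameters() if p.grad is not None]
        norms.append(torch.cat(grads).norm().item())
    return sum(norms) / len(norms)

print("mean gradient norm:", mean_gradient_norm(model, X, y))
```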

Empirical Studies on the Convergence of Feature Spaces in Deep Learning

no code implementations1 Jan 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

While deep learning is effective at learning features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well studied or compared.

Image Reconstruction Self-Supervised Learning

Implicit Regularization Effects of Unbiased Random Label Noises with SGD

no code implementations1 Jan 2021 Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu

Random label noise (or observational noise) widely exists in practical machine learning settings.

C-Watcher: A Framework for Early Detection of High-Risk Neighborhoods Ahead of COVID-19 Outbreak

no code implementations22 Dec 2020 Congxi Xiao, Jingbo Zhou, Jizhou Huang, An Zhuo, Ji Liu, Haoyi Xiong, Dejing Dou

Furthermore, to transfer the firsthand knowledge (witnessed in epicenters) to the target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features for precise early detection of high-risk neighborhoods in the target city, even before any confirmed cases are known.

Distance-aware Molecule Graph Attention Network for Drug-Target Binding Affinity Prediction

1 code implementation17 Dec 2020 Jingbo Zhou, Shuangli Li, Liang Huang, Haoyi Xiong, Fan Wang, Tong Xu, Hui Xiong, Dejing Dou

The hierarchical attentive aggregation can capture spatial dependencies among atoms, as well as fuse the position-enhanced information with the capability of discriminating multiple spatial relations among atoms.

Drug Discovery Graph Attention +2

Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement

no code implementations16 Oct 2020 Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou

Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms given a limited quantity of training samples.

Disentanglement Transfer Learning

XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup

no code implementations20 Jul 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

While existing multitask learning algorithms need to run backpropagation over both the source and target datasets and usually incur a higher gradient complexity, XMixup transfers the knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects auxiliary samples from the source dataset and augments training samples via the simple mixup strategy (a toy sketch follows below).

Transfer Learning
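
A toy rendition of the augmentation described above: for each target sample, draw an auxiliary sample from the source dataset (using a hypothetical per-class mapping) and blend the two with the standard mixup rule $x = \lambda x_{target} + (1-\lambda) x_{source}$. How XMixup actually chooses the auxiliary class for each target class is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
x_target = rng.random((100, 32))                 # target-task samples
y_target = rng.integers(0, 5, size=100)          # 5 target classes
x_source = rng.random((1000, 32))                # auxiliary source dataset
y_source = rng.integers(0, 20, size=1000)        # 20 source classes

# Hypothetical mapping: each target class borrows from one chosen source class.
aux_class_for = {c: rng.integers(0, 20) for c in range(5)}

def xmixup_batch(x_t, y_t, alpha=0.2):
    """Blend each target sample with an auxiliary source sample via mixup."""
    lam = rng.beta(alpha, alpha, size=len(x_t))[:, None]
    aux = np.stack([x_source[rng.choice(np.where(y_source == aux_class_for[c])[0])]
                    for c in y_t])
    return lam * x_t + (1.0 - lam) * aux         # labels stay those of the target task

x_mixed = xmixup_batch(x_target, y_target)
print(x_mixed.shape)
```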

Generating Person Images with Appearance-aware Pose Stylizer

1 code implementation17 Jul 2020 Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou

Generation of high-quality person images is challenging, due to the sophisticated entanglements among image factors, e. g., appearance, pose, foreground, background, local details, global structures, etc.

Image Generation

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr

1 code implementation ICML 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning, while the effects of the randomization can be easily converged throughout the overall learning procedure (a compact sketch follows below).

Transfer Learning
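
A compact sketch of the trick named above: while fine-tuning a pre-trained backbone, the task-specific fully-connected layer is periodically re-initialized so that deeper layers keep receiving meaningful gradients. The tiny backbone, schedule, and random data are illustrative placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())    # stands in for a pre-trained CNN
fc = nn.Linear(64, 5)                                     # task-specific head
optimizer = torch.optim.SGD(list(backbone.parameters()) + list(fc.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()
X, y = torch.randn(512, 32), torch.randint(0, 5, (512,))

num_epochs, reinit_every = 12, 4
for epoch in range(num_epochs):
    if epoch > 0 and epoch % reinit_every == 0:
        fc.reset_parameters()          # periodic re-initialization of the FC layer
    perm = torch.randperm(len(X))
    for i in range(0, len(X), 64):
        idx = perm[i:i + 64]
        loss = criterion(fc(backbone(X[idx])), y[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```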

Quantifying the Economic Impact of COVID-19 in Mainland China Using Human Mobility Data

no code implementations6 May 2020 Jizhou Huang, Haifeng Wang, Haoyi Xiong, Miao Fan, An Zhuo, Ying Li, Dejing Dou

While these strategies have effectively dealt with the critical situations of the outbreaks, the combination of the pandemic and mobility controls has slowed China's economic growth, resulting in the first quarterly decline of Gross Domestic Product (GDP) since GDP began to be calculated in 1992.

Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs

1 code implementation29 Apr 2020 Xiao Zhang, Haoyi Xiong, Dongrui Wu

Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance, challenging the bias-variance trade-off in classical learning theory.

Learning Theory Memorization

COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization

no code implementations26 Apr 2020 Xingjian Li, Haoyi Xiong, Haozhe An, Dejing Dou, Chengzhong Xu

Softening labels of training datasets with respect to data representations has been frequently used to improve the training of deep neural networks (DNNs).

General Classification

Parameter-Free Style Projection for Arbitrary Style Transfer

1 code implementation17 Mar 2020 Siyu Huang, Haoyi Xiong, Tianyang Wang, Bihan Wen, Qingzhong Wang, Zeyu Chen, Jun Huan, Dejing Dou

This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer, which includes a regularization term for matching the semantics between input contents and stylized outputs.

Style Transfer

Curriculum Audiovisual Learning

no code implementations26 Jan 2020 Di Hu, Zheng Wang, Haoyi Xiong, Dong Wang, Feiping Nie, Dejing Dou

Associating a sound and its producer in a complex audiovisual scene is a challenging task, especially when we lack annotated training data.

Clustering

Ultrafast Photorealistic Style Transfer via Neural Architecture Search

no code implementations5 Dec 2019 Jie An, Haoyi Xiong, Jun Huan, Jiebo Luo

Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration.

Network Pruning Neural Architecture Search +1

SecureGBM: Secure Multi-Party Gradient Boosting

no code implementations27 Nov 2019 Zhi Fengy, Haoyi Xiong, Chuanyuan Song, Sijia Yang, Baoxin Zhao, Licheng Wang, Zeyu Chen, Shengwen Yang, Li-Ping Liu, Jun Huan

Our experiments using real-world data showed that SecureGBM can well secure the communication and computation of LightGBM training and inference procedures for both parties, while losing less than 3% AUC, using the same number of iterations for gradient boosting, on a wide range of benchmark datasets.

Towards Making Deep Transfer Learning Never Hurt

no code implementations18 Nov 2019 Ruosi Wan, Haoyi Xiong, Xingjian Li, Zhanxing Zhu, Jun Huan

The empirical results show that the proposed descent direction estimation strategy, DTNH, can always improve the performance of deep transfer learning tasks based on all of the above regularizers, even when transferring pre-trained weights from inappropriate networks.

Knowledge Distillation Transfer Learning

Fast Universal Style Transfer for Artistic and Photorealistic Rendering

no code implementations6 Jul 2019 Jie An, Haoyi Xiong, Jiebo Luo, Jun Huan, Jinwen Ma

Given a pair of images as the source of content and the reference of style, existing solutions usually first train an auto-encoder (AE) to reconstruct the image using deep features and then embed pre-defined style transfer modules into the AE reconstruction procedure to transfer the style of the reconstructed image by modifying the deep features.

Style Transfer

On the Noisy Gradient Descent that Generalizes as SGD

1 code implementation ICML 2020 Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu

The gradient noise of SGD is considered to play a central role in the observed strong generalization abilities of deep learning.

StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks

no code implementations6 Jun 2019 Jie An, Haoyi Xiong, Jinwen Ma, Jiebo Luo, Jun Huan

Finally, compared to existing universal style transfer networks for photorealistic rendering, such as PhotoWCT, which stacks multiple well-trained auto-encoders and WCT transforms in a non-end-to-end manner, the architectures designed by StyleNAS produce better style-transferred images with details preserved, using a tiny number of operators/parameters, and enjoying around a 500x inference-time speed-up.

Image Classification Neural Architecture Search +4

SHE2: Stochastic Hamiltonian Exploration and Exploitation for Derivative-Free Optimization

no code implementations ICLR 2019 Haoyi Xiong, Wenqing Hu, Zhanxing Zhu, Xinjian Li, Yunchao Zhang, Jun Huan

Derivative-free optimization (DFO) using trust region methods is frequently used for machine learning applications, such as (hyper-)parameter optimization where the derivatives of the objective functions are unknown.

BIG-bench Machine Learning Text-to-Image Generation

Quasi-potential as an implicit regularizer for the loss function in the stochastic gradient descent

no code implementations18 Jan 2019 Wenqing Hu, Zhanxing Zhu, Haoyi Xiong, Jun Huan

We show in this case that the quasi-potential function is related to the noise covariance structure of SGD via a partial differential equation of Hamilton-Jacobi type (a standard form of such an equation is given below).

Relation Variational Inference
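
For reference, a stationary Hamilton-Jacobi equation of the kind alluded to above, written in the standard Freidlin-Wentzell form with drift $b(x)$ and noise covariance $\Sigma(x)$; the exact equation derived in the paper may differ:

```latex
% Quasi-potential Q solving a stationary Hamilton-Jacobi equation, with drift b(x)
% (e.g., the negative full-batch gradient) and SGD noise covariance \Sigma(x).
\[
  \tfrac{1}{2}\,\nabla Q(x)^{\top} \Sigma(x)\, \nabla Q(x) \;+\; b(x)^{\top} \nabla Q(x) \;=\; 0 .
\]
```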

Neural Control Variates for Variance Reduction

no code implementations1 Jun 2018 Ruosi Wan, Mingjun Zhong, Haoyi Xiong, Zhanxing Zhu

In statistics and machine learning, approximation of an intractable integral is often achieved using the unbiased Monte Carlo estimator, but the variance of the estimate is generally high in many applications.
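
For context, the classical (non-neural) control-variate estimator behind the setting above: subtract a control function $h(X)$ with known mean, scaled by the coefficient $c^* = \mathrm{Cov}(f, h)/\mathrm{Var}(h)$ estimated from the samples. The integrand and control function below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000)                # X ~ Uniform(0, 1)

f = np.exp(x)                          # estimate E[exp(X)] = e - 1
h = x                                  # control variate with known mean E[X] = 1/2

c_star = np.cov(f, h)[0, 1] / np.var(h)           # estimated optimal coefficient
plain = f.mean()
controlled = (f - c_star * (h - 0.5)).mean()      # control-variate estimator

print("plain MC   :", plain, " sample var:", f.var() / len(x))
print("with CV    :", controlled, " sample var:", (f - c_star * h).var() / len(x))
print("true value :", np.e - 1)
```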

CSWA: Aggregation-Free Spatial-Temporal Community Sensing

no code implementations15 Nov 2017 Jiang Bian, Haoyi Xiong, Yanjie Fu, Sajal K. Das

In this paper, we present a novel community sensing paradigm -- Community Sensing Without Aggregation (CSWA).

Compressive Sensing Distributed Optimization

FWDA: a Fast Wishart Discriminant Analysis with its Application to Electronic Health Records Data Classification

no code implementations25 Apr 2017 Haoyi Xiong, Wei Cheng, Wenqing Hu, Jiang Bian, Zhishan Guo

Classical LDA for EHR data classification, however, suffers from two handicaps: the ill-posed estimation of LDA parameters (e.g., the covariance matrix), and the "linear inseparability" of EHR data.

Classification General Classification

CT-Mapper: Mapping Sparse Multimodal Cellular Trajectories using a Multilayer Transportation Network

no code implementations22 Apr 2016 Fereshteh Asgari, Alexis Sultan, Haoyi Xiong, Vincent Gauthier, Mounim El-Yacoubi

One of the main strengths of CT-Mapper is its capability to map noisy, sparse, multimodal cellular trajectories over a multilayer transportation network whose layers have different physical properties, rather than only mapping trajectories associated with a single layer.
