Search Results for author: Chang Xu

Found 190 papers, 86 papers with code

On Dropping Clusters to Regularize Graph Convolutional Neural Networks

no code implementations ECCV 2020 Xikun Zhang, Chang Xu, DaCheng Tao

Dropout has been widely adopted to regularize graph convolutional networks (GCNs) by randomly zeroing entries of the node feature vectors, and it achieves promising performance on various tasks.

Action Recognition Skeleton Based Action Recognition
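The entry-wise dropout this abstract describes, random zeroing of node-feature entries, can be sketched in a few lines. This is the generic baseline, not the cluster-dropping method the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Node feature matrix for a toy graph: 4 nodes, 8-dim features.
X = rng.normal(size=(4, 8))

def feature_dropout(X, p=0.5, rng=rng):
    """Standard (inverted) dropout on node features: randomly zero
    entries and rescale survivors so the expected value is unchanged."""
    mask = rng.random(X.shape) >= p
    return X * mask / (1.0 - p)

X_drop = feature_dropout(X, p=0.5)
```

Each surviving entry is scaled by 1/(1-p), so a GCN layer sees unbiased features in expectation.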

Detecting Every Object from Events

2 code implementations 8 Apr 2024 Haitian Zhang, Chang Xu, Xinya Wang, Bingde Liu, Guang Hua, Lei Yu, Wen Yang

Object detection is critical in autonomous driving, and it is more practical yet challenging to localize objects of unknown categories: an endeavour known as Class-Agnostic Object Detection (CAOD).

Autonomous Driving Class-agnostic Object Detection +5

NeRF2Points: Large-Scale Point Cloud Generation From Street Views' Radiance Field Optimization

no code implementations 7 Apr 2024 Peng Tu, Xun Zhou, Mingming Wang, Xiaojun Yang, Bo Peng, Ping Chen, Xiu Su, Yawen Huang, Yefeng Zheng, Chang Xu

Neural Radiance Fields (NeRF) have emerged as a paradigm-shifting methodology for the photorealistic rendering of objects and environments, enabling the synthesis of novel viewpoints with remarkable fidelity.

Autonomous Vehicles Point Cloud Generation

Towards Memorization-Free Diffusion Models

no code implementations 1 Apr 2024 Chen Chen, Daochang Liu, Chang Xu

Pretrained diffusion models and their outputs are widely accessible due to their exceptional capacity for synthesizing high-quality images and their open-source nature.

Denoising Memorization

ConGeo: Robust Cross-view Geo-localization across Ground View Variations

no code implementations 20 Mar 2024 Li Mi, Chang Xu, Javiera Castillo-Navarro, Syrielle Montariol, Wen Yang, Antoine Bosselut, Devis Tuia

Cross-view geo-localization aims at localizing a ground-level query image by matching it to its corresponding geo-referenced aerial view.

Learning Cross-view Visual Geo-localization without Ground Truth

no code implementations 19 Mar 2024 Haoyuan Li, Chang Xu, Wen Yang, Huai Yu, Gui-Song Xia

We observe that training on unlabeled cross-view images presents significant challenges, including the need to establish relationships within unlabeled data and reconcile view discrepancies between uncertain queries and references.

Self-Supervised Learning

Collage Prompting: Budget-Friendly Visual Recognition with GPT-4V

no code implementations 18 Mar 2024 Siyu Xu, Yunke Wang, Daochang Liu, Chang Xu

Based on the observation that the accuracy of GPT-4V's image recognition varies significantly with the order of images within the collage prompt, our method further learns to optimize the arrangement of images for maximum recognition accuracy.

Navigate

Understanding Robustness of Visual State Space Models for Image Classification

no code implementations 16 Mar 2024 Chengbin Du, Yanxi Li, Chang Xu

VMamba exhibits exceptional generalizability with out-of-distribution data but shows scalability weaknesses against natural adversarial examples and common corruptions.

Adversarial Robustness Image Classification

EfficientVMamba: Atrous Selective Scan for Light Weight Visual Mamba

1 code implementation 15 Mar 2024 Xiaohuan Pei, Tao Huang, Chang Xu

Inspired by this, this work explores the potential of visual state space models in light-weight model design and introduces a novel efficient model variant dubbed EfficientVMamba.

Language Modelling

LocalMamba: Visual State Space Model with Windowed Selective Scan

1 code implementation 14 Mar 2024 Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, Chang Xu

This paper posits that the key to enhancing Vision Mamba (ViM) lies in optimizing scan directions for sequence modeling.

Active Generation for Image Classification

no code implementations 11 Mar 2024 Tao Huang, Jiaqi Liu, Shan You, Chang Xu

Recently, the growing capabilities of deep generative models have underscored their potential in enhancing image classification accuracy.

Active Learning Classification +3

MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process

1 code implementation 9 Mar 2024 Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, Jiang Bian

However, the effective utilization of their strong modeling ability in the probabilistic time series forecasting task remains an open question, partially due to the challenge of instability arising from their stochastic nature.

Probabilistic Time Series Forecasting Time Series +1

Data-efficient Large Vision Models through Sequential Autoregression

1 code implementation 7 Feb 2024 Jianyuan Guo, Zhiwei Hao, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu, Kai Han, Chang Xu

Training general-purpose vision models on purely sequential visual data, eschewing linguistic inputs, has heralded a new frontier in visual understanding.

Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models

1 code implementation 6 Feb 2024 Jianyuan Guo, Hanting Chen, Chengcheng Wang, Kai Han, Chang Xu, Yunhe Wang

Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, which is called superalignment.

Few-Shot Learning Knowledge Distillation +1

Accelerated Cloud for Artificial Intelligence (ACAI)

no code implementations 30 Jan 2024 Dachi Chen, Weitian Ding, Chen Liang, Chang Xu, Junwei Zhang, Majd Sakr

Training an effective Machine learning (ML) model is an iterative process that requires effort in multiple dimensions.

Scheduling

Visual Imitation Learning with Calibrated Contrastive Representation

no code implementations 21 Jan 2024 Yunke Wang, Linwei Tao, Bo Du, Yutian Lin, Chang Xu

Adversarial Imitation Learning (AIL) allows the agent to reproduce expert behavior with low-dimensional states and actions.

Contrastive Learning Imitation Learning

Robust Tiny Object Detection in Aerial Images amidst Label Noise

no code implementations 16 Jan 2024 Haoran Zhu, Chang Xu, Wen Yang, Ruixiang Zhang, Yan Zhang, Gui-Song Xia

In this study, we address the intricate issue of tiny object detection under noisy label supervision.

Denoising Object +2

Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models

1 code implementation NeurIPS 2023 Yichao Cao, Qingfei Tang, Xiu Su, Chen Song, Shan You, Xiaobo Lu, Chang Xu

We conduct a deep analysis of the three hierarchical features inherent in visual HOI detectors and propose a method for high-level relation extraction aimed at VL foundation models, which we call HO prompt-based learning.

Human-Object Interaction Detection Relation Extraction +1

One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation

1 code implementation NeurIPS 2023 Zhiwei Hao, Jianyuan Guo, Kai Han, Yehui Tang, Han Hu, Yunhe Wang, Chang Xu

To tackle the challenge in distilling heterogeneous models, we propose a simple yet effective one-for-all KD framework called OFA-KD, which significantly improves the distillation performance between heterogeneous architectures.

Knowledge Distillation

Imitation Learning from Purified Demonstration

no code implementations 11 Oct 2023 Yunke Wang, Minjing Dong, Bo Du, Chang Xu

To tackle these problems, we propose to purify the potential perturbations in imperfect demonstrations and subsequently conduct imitation learning from purified demonstrations.

Imitation Learning

Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks

no code implementations 28 Sep 2023 Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu

Adversarial training serves as one of the most popular and effective methods to defend against adversarial perturbations.

MM-NeRF: Multimodal-Guided 3D Multi-Style Transfer of Neural Radiance Field

no code implementations 24 Sep 2023 Zijiang Yang, Zhongwei Qiu, Chang Xu, Dongmei Fu

3D style transfer aims to generate stylized views of 3D scenes with specified styles, which requires high-quality generation while preserving multi-view consistency.

Incremental Learning Style Transfer

Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization

no code implementations 18 Sep 2023 Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu

Moreover, to ameliorate the phenomenon of sub-optimization with one fixed style, we propose to discover the optimal style given a target through style optimization in a continuous relaxation manner.

Face Recognition

A Benchmark Study on Calibration

no code implementations 23 Aug 2023 Linwei Tao, Younan Zhu, Haolan Guo, Minjing Dong, Chang Xu

As far as we are aware, our research represents the first large-scale investigation into calibration properties and the premier study of calibration issues within NAS.

Neural Architecture Search

Boosting Diffusion Models with an Adaptive Momentum Sampler

no code implementations 23 Aug 2023 Xiyu Wang, Anh-Dung Dinh, Daochang Liu, Chang Xu

Our proposed sampler can be readily applied to a pre-trained diffusion model, utilizing momentum mechanisms and adaptive updating to smooth the reverse sampling process and ensure stable generation, resulting in outputs of enhanced quality.

Efficient Transfer Learning in Diffusion Models via Adversarial Noise

no code implementations 23 Aug 2023 Xiyu Wang, Baijiong Lin, Daochang Liu, Chang Xu

Diffusion Probabilistic Models (DPMs) have demonstrated substantial promise in image generation tasks but heavily rely on the availability of large amounts of training data.

Denoising Image Generation +1

CoNe: Contrast Your Neighbours for Supervised Image Classification

1 code implementation 21 Aug 2023 Mingkai Zheng, Shan You, Lang Huang, Xiu Su, Fei Wang, Chen Qian, Xiaogang Wang, Chang Xu

Moreover, to further boost the performance, we propose "distributional consistency" as a more informative regularization to enable similar instances to have a similar probability distribution.

Classification Image Classification
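Read as "similar instances should predict similar distributions", the distributional-consistency regularizer can be illustrated with a KL-divergence penalty between predicted class distributions. This is a plausible sketch, not necessarily the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Logits of an anchor image and a similar neighbouring instance.
logits_a = np.array([2.0, 0.5, -1.0])
logits_b = np.array([1.8, 0.6, -0.9])

# Consistency penalty pulls the two predicted distributions together.
consistency_loss = kl(softmax(logits_a), softmax(logits_b))
```

The penalty is zero exactly when the two instances share one predicted distribution, and grows as they diverge.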

Microstructure-Empowered Stock Factor Extraction and Utilization

no code implementations 16 Aug 2023 Xianfeng Jiao, Zizhong Li, Chang Xu, Yang Liu, Weiqing Liu, Jiang Bian

To address these challenges, we propose a novel framework that aims to effectively extract essential factors from order flow data for diverse downstream tasks across different granularities and scenarios.

Stock Trend Prediction

Fingerprints of Generative Models in the Frequency Domain

no code implementations 29 Jul 2023 Tianyun Yang, Juan Cao, Danding Wang, Chang Xu

It is verified in existing works that CNN-based generative models leave unique fingerprints on generated images.

Re-mine, Learn and Reason: Exploring the Cross-modal Semantic Correlations for Language-guided HOI detection

no code implementations ICCV 2023 Yichao Cao, Qingfei Tang, Feng Yang, Xiu Su, Shan You, Xiaobo Lu, Chang Xu

Human-Object Interaction (HOI) detection is a challenging computer vision task that requires visual models to address the complex interactive relationship between humans and objects and predict HOI triplets.

Human-Object Interaction Detection Sentence +1

What Can Simple Arithmetic Operations Do for Temporal Modeling?

2 code implementations ICCV 2023 Wenhao Wu, Yuxin Song, Zhun Sun, Jingdong Wang, Chang Xu, Wanli Ouyang

We conduct comprehensive ablation studies on the instantiation of ATMs and demonstrate that this module provides powerful temporal modeling capability at a low computational cost.

Action Classification Action Recognition +1

Neural Architecture Retrieval

1 code implementation 16 Jul 2023 Xiaohuan Pei, Yanxi Li, Minjing Dong, Chang Xu

With the increasing number of new neural architecture designs and substantial existing neural architectures, it becomes difficult for the researchers to situate their contributions compared with existing neural architectures or establish the connections between their designs and other relevant ones.

Contrastive Learning Graph Representation Learning +1

GPT Self-Supervision for a Better Data Annotator

no code implementations 7 Jun 2023 Xiaohuan Pei, Yanxi Li, Chang Xu

In the one-shot tuning phase, we sample a data instance from the support set as part of the prompt for GPT to generate a textual summary, which is then used to recover the original data.

One-Shot Learning Sentence

Knowledge Diffusion for Distillation

1 code implementation NeurIPS 2023 Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Chang Xu

To address this, we propose to denoise student features using a diffusion model trained by teacher features.

Denoising Image Classification +4

VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale

1 code implementation 25 May 2023 Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang

The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results.

Data Augmentation Knowledge Distillation

Dual Focal Loss for Calibration

1 code implementation 23 May 2023 Linwei Tao, Minjing Dong, Chang Xu

While different variants of focal loss have been explored, it is difficult to find a balance between over-confidence and under-confidence.
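For context, the standard focal loss these variants build on down-weights confident examples via a (1 - p)^gamma factor. The sketch below is the classic single-focal form, not the paper's dual focal loss:

```python
import numpy as np

def focal_loss(p_true, gamma=2.0):
    """Focal loss on the true-class probability p_true:
    FL = -(1 - p)^gamma * log(p).  gamma = 0 recovers cross-entropy."""
    p = np.clip(p_true, 1e-12, 1.0)
    return -((1.0 - p) ** gamma) * np.log(p)

# A confident correct prediction is down-weighted vs. cross-entropy,
# which is what shifts the model toward under-confidence as gamma grows.
ce = focal_loss(0.9, gamma=0.0)
fl = focal_loss(0.9, gamma=2.0)
```

The gamma knob is exactly the over-/under-confidence trade-off the abstract refers to.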

Can GPT-4 Perform Neural Architecture Search?

1 code implementation 21 Apr 2023 Mingkai Zheng, Xiu Su, Shan You, Fei Wang, Chen Qian, Chang Xu, Samuel Albanie

We investigate the potential of GPT-4 to perform Neural Architecture Search (NAS) -- the task of designing effective neural architectures.

Navigate Neural Architecture Search

Dynamic Coarse-to-Fine Learning for Oriented Tiny Object Detection

1 code implementation CVPR 2023 Chang Xu, Jian Ding, Jinwang Wang, Wen Yang, Huai Yu, Lei Yu, Gui-Song Xia

Despite the exploration of adaptive label assignment in recent oriented object detectors, the extreme geometry shape and limited feature of oriented tiny objects still induce severe mismatch and imbalance issues.

object-detection Object Detection +3

Thin Films on the Skin, but not Frictional Agents, Attenuate the Percept of Pleasantness to Brushed Stimuli

no code implementations 28 Feb 2023 Merat Rezaei, Saad S. Nagi, Chang Xu, Sarah McIntyre, Hakan Olausson, Gregory J. Gerling

Brushed stimuli are perceived as pleasant when stroked lightly on the skin surface of a touch receiver at certain velocities.

Friction

Two-in-one Knowledge Distillation for Efficient Facial Forgery Detection

no code implementations 21 Feb 2023 Chuyang Zhou, Jiajun Huang, Daochang Liu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu

More specifically, knowledge distillation on both the spatial and frequency branches yields worse performance than distillation on the spatial branch alone.

Knowledge Distillation Vocal Bursts Valence Prediction

Unlabeled Imperfect Demonstrations in Adversarial Imitation Learning

1 code implementation 13 Feb 2023 Yunke Wang, Bo Du, Chang Xu

The trajectories of an initial agent policy could be closer to those non-optimal expert demonstrations; however, within the framework of adversarial imitation learning, the agent policy will be optimized to cheat the discriminator and produce trajectories similar to the optimal expert demonstrations.

Imitation Learning
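The "cheat the discriminator" dynamic is the standard adversarial-imitation setup. A minimal sketch of the discriminator side (generic GAIL-style, not this paper's specific handling of imperfect demonstrations):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_expert_logits, d_agent_logits, eps=1e-12):
    """The discriminator labels expert trajectories 1 and agent
    trajectories 0; the agent is then optimized to push D(agent)
    toward 1, i.e. to fool the discriminator."""
    d_e = sigmoid(np.asarray(d_expert_logits, dtype=float))
    d_a = sigmoid(np.asarray(d_agent_logits, dtype=float))
    return -(np.log(d_e + eps).mean() + np.log(1.0 - d_a + eps).mean())

loss = discriminator_loss([2.0, 1.5], [-1.0, -0.5])
```

A well-separated discriminator incurs a lower loss than a confused one, which is what drives the adversarial game.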

Calibrating a Deep Neural Network with Its Predecessors

1 code implementation 13 Feb 2023 Linwei Tao, Minjing Dong, Daochang Liu, Changming Sun, Chang Xu

However, early stopping, as a well-known technique to mitigate overfitting, fails to calibrate networks.

Anti-Compression Contrastive Facial Forgery Detection

no code implementations 13 Feb 2023 Jiajun Huang, Xinqi Zhu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu

To enhance the performance of such models, we consider weakly compressed and strongly compressed data as two views of the original data, which should have similar representations and similar relationships with other samples.

Contrastive Learning

Trade-Off Between Robustness and Accuracy of Vision Transformers

no code implementations CVPR 2023 Yanxi Li, Chang Xu

Although deep neural networks (DNNs) have shown great successes in computer vision tasks, they are vulnerable to perturbations on inputs, and there exists a trade-off between the natural accuracy and robustness to such perturbations, which is mainly caused by the existence of robust non-predictive features and non-robust predictive features.

Private Image Generation With Dual-Purpose Auxiliary Classifier

no code implementations CVPR 2023 Chen Chen, Daochang Liu, Siqi Ma, Surya Nepal, Chang Xu

However, apart from this standard utility, we identify the "reversed utility" as another crucial aspect, which computes the accuracy on generated data of a classifier trained using real data, dubbed as real2gen accuracy (r2g%).

Image Generation Privacy Preserving

Adversarial Robustness via Random Projection Filters

1 code implementation CVPR 2023 Minjing Dong, Chang Xu

Deep Neural Networks show superior performance in various tasks but are vulnerable to adversarial attacks.

Adversarial Robustness Attribute +1

ContraFeat: Contrasting Deep Features for Semantic Discovery

no code implementations 14 Dec 2022 Xinqi Zhu, Chang Xu, DaCheng Tao

In this paper, we propose a model that automates this process and achieves state-of-the-art semantic discovery performance.

FastMIM: Expediting Masked Image Modeling Pre-training for Vision

1 code implementation 13 Dec 2022 Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Yunhe Wang, Chang Xu

This paper presents FastMIM, a simple and generic framework for expediting masked image modeling with the following two steps: (i) pre-training vision backbones with low-resolution input images; and (ii) reconstructing Histograms of Oriented Gradients (HOG) feature instead of original RGB values of the input images.

GhostNetV2: Enhance Cheap Operation with Long-Range Attention

15 code implementations 23 Nov 2022 Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Chao Xu, Yunhe Wang

The convolutional operation can only capture local information in a window region, which prevents performance from being further improved.

Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations

1 code implementation 26 Oct 2022 Haoyu Xie, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, Chang Xu

In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space.

Contrastive Learning Semi-Supervised Semantic Segmentation

Learning Differential Operators for Interpretable Time Series Modeling

no code implementations 3 Sep 2022 Yingtao Luo, Chang Xu, Yang Liu, Weiqing Liu, Shun Zheng, Jiang Bian

In this work, we propose a learning framework that can automatically obtain interpretable PDE models from sequential data.

Decision Making Meta-Learning +2

Motion Robust High-Speed Light-Weighted Object Detection With Event Camera

1 code implementation 24 Aug 2022 Bingde Liu, Chang Xu, Wen Yang, Huai Yu, Lei Yu

In this work, we propose a motion robust and high-speed detection pipeline which better leverages the event data.

Data Augmentation object-detection +3

RFLA: Gaussian Receptive Field based Label Assignment for Tiny Object Detection

1 code implementation 18 Aug 2022 Chang Xu, Jinwang Wang, Wen Yang, Huai Yu, Lei Yu, Gui-Song Xia

Then, instead of assigning samples with IoU or center sampling strategy, a new Receptive Field Distance (RFD) is proposed to directly measure the similarity between the Gaussian receptive field and ground truth.

Object object-detection +1

LightViT: Towards Light-Weight Convolution-Free Vision Transformers

1 code implementation 12 Jul 2022 Tao Huang, Lang Huang, Shan You, Fei Wang, Chen Qian, Chang Xu

Vision transformers (ViTs) are usually considered to be less light-weight than convolutional neural networks (CNNs) due to the lack of inductive bias.

Image Classification Inductive Bias +3

Masked Distillation with Receptive Tokens

1 code implementation 29 May 2022 Tao Huang, Yuan Zhang, Shan You, Fei Wang, Chen Qian, Jian Cao, Chang Xu

To obtain a group of masks, the receptive tokens are learned via the regular task loss but with teacher fixed, and we also leverage a Dice loss to enrich the diversity of learned masks.

object-detection Object Detection +1

Knowledge Distillation from A Stronger Teacher

2 code implementations 21 May 2022 Tao Huang, Shan You, Fei Wang, Chen Qian, Chang Xu

In this paper, we show that simply preserving the relations between the predictions of teacher and student would suffice, and propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly.

Ranked #2 on Knowledge Distillation on ImageNet (using extra training data)

Image Classification Knowledge Distillation +2
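A correlation-based relation loss of the kind described can be sketched as one minus the per-sample Pearson correlation between teacher and student class probabilities. This is an illustrative reading of the snippet; the full method also captures richer inter-sample relations:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pearson(a, b, eps=1e-12):
    """Row-wise Pearson correlation between two matrices."""
    a = a - a.mean(axis=-1, keepdims=True)
    b = b - b.mean(axis=-1, keepdims=True)
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1)
                              * np.linalg.norm(b, axis=-1) + eps)

def relation_kd_loss(logits_s, logits_t):
    """Match the *relations* (correlation of class probabilities)
    between student and teacher rather than their exact values."""
    ps, pt = softmax(logits_s), softmax(logits_t)
    return (1.0 - pearson(ps, pt)).mean()

logits_t = np.array([[5.0, 1.0, 0.0], [0.0, 4.0, 1.0]])
loss_same = relation_kd_loss(logits_t, logits_t)   # relations perfectly matched
loss_diff = relation_kd_loss(-logits_t, logits_t)
```

Because only relative orderings matter, the student is not forced to reproduce a much stronger teacher's exact confidence values.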

Searching for Network Width with Bilaterally Coupled Network

1 code implementation 25 Mar 2022 Xiu Su, Shan You, Jiyang Xie, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.

Fairness

DyRep: Bootstrapping Training with Dynamic Re-parameterization

2 code implementations CVPR 2022 Tao Huang, Shan You, Bohan Zhang, Yuxuan Du, Fei Wang, Chen Qian, Chang Xu

Structural re-parameterization (Rep) methods achieve noticeable improvements on simple VGG-style networks.

Relational Self-Supervised Learning

no code implementations 16 Mar 2022 Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu

Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved great success in learning visual representations without data annotations.

Contrastive Learning Relation +2

Multi-Tailed Vision Transformer for Efficient Inference

no code implementations 3 Mar 2022 Yunke Wang, Bo Du, Wenyuan Wang, Chang Xu

To satisfy the sequential input of Transformer, the tail of ViT first splits each image into a sequence of visual tokens with a fixed length.
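The fixed-length tokenization step described here is the standard ViT patchification (minus the learned linear projection). A minimal sketch:

```python
import numpy as np

def patchify(image, patch=4):
    """Split an HxWxC image into a fixed-length sequence of
    flattened patch tokens, as in a ViT tail."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    return (image
            .reshape(H // patch, patch, W // patch, patch, C)
            .transpose(0, 2, 1, 3, 4)          # group patches together
            .reshape(-1, patch * patch * C))   # one row per token

img = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
tokens = patchify(img, patch=4)   # 4 tokens, each of length 4*4*3 = 48
```

The sequence length (H/patch)·(W/patch) is fixed by the patch size, which is exactly the constraint the multi-tailed design relaxes.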

Relational Surrogate Loss Learning

1 code implementation ICLR 2022 Tao Huang, Zekang Li, Hua Lu, Yong Shan, Shusheng Yang, Yang Feng, Fei Wang, Shan You, Chang Xu

Evaluation metrics in machine learning can rarely be used directly as loss functions, as they may be non-differentiable and non-decomposable, e.g., average precision and F1 score.

Image Classification Machine Reading Comprehension +3

GhostNets on Heterogeneous Devices via Cheap Operations

8 code implementations 10 Jan 2022 Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian

The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks.

DeepFake Disrupter: The Detector of DeepFake Is My Friend

no code implementations CVPR 2022 Xueyu Wang, Jiajun Huang, Siqi Ma, Surya Nepal, Chang Xu

We argue that the detectors do not share a similar perspective as human eyes, which might still be spoofed by the disrupted data.

Face Swapping

An Empirical Study of Adder Neural Networks for Object Detection

no code implementations NeurIPS 2021 Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang

Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications.

Autonomous Driving Face Detection +3

Handling Long-tailed Feature Distribution in AdderNets

no code implementations NeurIPS 2021 Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu

Adder neural networks (ANNs) are designed for low energy cost which replace expensive multiplications in convolutional neural networks (CNNs) with cheaper additions to yield energy-efficient neural networks and hardware accelerations.

Knowledge Distillation

Towards Stable and Robust AdderNets

no code implementations NeurIPS 2021 Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu

Adder neural networks (AdderNets) replace the massive multiplications in original convolutions with cheap additions while achieving comparable performance, thus yielding a series of energy-efficient neural networks and hardware accelerations.

Adversarial Robustness

GreedyNASv2: Greedier Search with a Greedy Path Filter

no code implementations CVPR 2022 Tao Huang, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu

In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter those weak ones, so that the search can be thus implemented on the shrunk space more greedily and efficiently.

An Image Patch is a Wave: Phase-Aware Vision MLP

10 code implementations CVPR 2022 Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, Yunhe Wang

To dynamically aggregate tokens, we propose to represent each token as a wave function with two parts, amplitude and phase.

Image Classification object-detection +2

A Normalized Gaussian Wasserstein Distance for Tiny Object Detection

3 code implementations 26 Oct 2021 Jinwang Wang, Chang Xu, Wen Yang, Lei Yu

Our key observation is that Intersection over Union (IoU) based metrics such as IoU itself and its extensions are very sensitive to the location deviation of the tiny objects, and drastically deteriorate the detection performance when used in anchor-based detectors.

Object object-detection +1
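The paper's alternative metric models each box (cx, cy, w, h) as a 2-D Gaussian and compares boxes via an exponentially normalized Wasserstein distance, which degrades smoothly under small location deviations instead of collapsing like IoU. A sketch, with the normalizing constant C as an assumed value:

```python
import numpy as np

def nwd(box_a, box_b, C=12.8):
    """Normalized Gaussian Wasserstein distance between two axis-aligned
    boxes (cx, cy, w, h), each modeled as N(center, diag(w/2, h/2)^2).
    C is a dataset-dependent normalizing constant (assumed here)."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    # Squared 2-Wasserstein distance between the two Gaussians.
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return np.exp(-np.sqrt(w2_sq) / C)

same = nwd((10, 10, 4, 4), (10, 10, 4, 4))     # identical boxes -> 1.0
shifted = nwd((10, 10, 4, 4), (13, 10, 4, 4))  # non-zero even with no overlap
```

Note that two tiny boxes with zero IoU overlap still receive a graded, non-zero similarity, which is the key property for tiny-object label assignment.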

Learning Versatile Convolution Filters for Efficient Visual Recognition

no code implementations 20 Sep 2021 Kai Han, Yunhe Wang, Chang Xu, Chunjing Xu, Enhua Wu, DaCheng Tao

A series of secondary filters can be derived from a primary filter with the help of binary masks.
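One way to read "secondary filters derived from a primary filter with binary masks" is nested masking of a single kernel, so several effective receptive sizes share one set of weights. The sketch below is an illustrative interpretation under that assumption, not the paper's exact derivation rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# A primary 5x5 convolution filter.
primary = rng.normal(size=(5, 5))

def secondary_filters(primary):
    """Derive nested secondary filters from one primary filter by
    zeroing its outer rings with binary masks (an assumed scheme)."""
    k = primary.shape[0]
    filters = []
    for r in range(k // 2 + 1):
        mask = np.zeros_like(primary)
        mask[r:k - r, r:k - r] = 1.0   # keep an (k-2r)x(k-2r) core
        filters.append(primary * mask)
    return filters

fs = secondary_filters(primary)  # 5x5, 3x3, and 1x1 effective filters
```

All secondary filters reuse the primary filter's parameters, which is the parameter-efficiency argument the abstract hints at.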

Hire-MLP: Vision MLP via Hierarchical Rearrangement

10 code implementations CVPR 2022 Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, Yunhe Wang

Previous vision MLPs such as MLP-Mixer and ResMLP accept linearly flattened image patches as input, making them inflexible for different input sizes and hard to capture spatial information.

Image Classification object-detection +2

DeepFake MNIST+: A DeepFake Facial Animation Dataset

1 code implementation 18 Aug 2021 Jiajun Huang, Xueyu Wang, Bo Du, Pei Du, Chang Xu

It includes 10,000 facial animation videos spanning ten different actions, which can spoof recent liveness detectors.

DeepFake Detection Face Swapping +1

Neural Architecture Dilation for Adversarial Robustness

no code implementations NeurIPS 2021 Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu

With the tremendous advances in the architecture and scale of convolutional neural networks (CNNs) over the past few decades, they can easily reach or even exceed the performance of humans in certain tasks.

Adversarial Robustness

ReSSL: Relational Self-Supervised Learning with Weak Augmentation

2 code implementations NeurIPS 2021 Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu

Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved great success in learning visual representations without data annotations.

Contrastive Learning Relation +2

CMT: Convolutional Neural Networks Meet Vision Transformers

14 code implementations CVPR 2022 Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, Chang Xu

Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image.

Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning

1 code implementation 12 Jul 2021 Jun Wang, Chang Xu, Francisco Guzman, Ahmed El-Kishky, Yuqing Tang, Benjamin I. P. Rubinstein, Trevor Cohn

Neural machine translation systems are known to be vulnerable to adversarial test inputs; however, as we show in this paper, these systems are also vulnerable to training attacks.

Data Poisoning Machine Translation +3

Augmented Shortcuts for Vision Transformers

4 code implementations NeurIPS 2021 Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang

Transformer models have achieved great progress on computer vision tasks recently.

ViTAS: Vision Transformer Architecture Search

1 code implementation 25 Jun 2021 Xiu Su, Shan You, Jiyang Xie, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu

Vision transformers (ViTs) inherited the success of NLP but their structures have not been sufficiently investigated and optimized for visual tasks.

Inductive Bias Neural Architecture Search

ReNAS: Relativistic Evaluation of Neural Architecture Search

7 code implementations CVPR 2021 Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Learning Student Networks in the Wild

1 code implementation CVPR 2021 Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang

Experiments on various datasets demonstrate that the student networks learned by the proposed method can achieve comparable performance with those using the original dataset.

Knowledge Distillation Model Compression

Positive-Unlabeled Data Purification in the Wild for Object Detection

no code implementations CVPR 2021 Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Xinghao Chen, Chunjing Xu, Chang Xu, Yunhe Wang

In this paper, we present a positive-unlabeled learning based scheme to expand training data by purifying valuable images from massive unlabeled ones, where the original training data are viewed as positive data and the unlabeled images in the wild are unlabeled data.

Knowledge Distillation object-detection +1

K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets

no code implementations 11 Jun 2021 Xiu Su, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code.
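The "convex combination of items in a dictionary with a simplex code" can be sketched directly: a softmax places the learnable code on the probability simplex, and the operation weight is the weighted sum of dictionary items. Shapes and sizes below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Dictionary of K=3 shared weight "items" for one operation (3x3 kernels).
dictionary = rng.normal(size=(3, 3, 3))

# A learnable code per path; softmax constrains it to the simplex
# (non-negative, sums to 1), making the combination convex.
code = softmax(np.array([0.2, 1.0, -0.5]))

# Operation weight = convex combination of the K dictionary items.
op_weight = np.tensordot(code, dictionary, axes=1)
```

Each path gets its own code but all paths share the K dictionary items, which is the weight-sharing idea in the title.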

Commutative Lie Group VAE for Disentanglement Learning

1 code implementation 7 Jun 2021 Xinqi Zhu, Chang Xu, DaCheng Tao

Instead, we propose to encode the data variations with groups, a structure not only can equivariantly represent variations, but can also be adaptively optimized to preserve the properties of data variations.

Disentanglement

Patch Slimming for Efficient Vision Transformers

no code implementations CVPR 2022 Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, DaCheng Tao

We first identify the effective patches in the last layer and then use them to guide the patch selection process of previous layers.

Efficient ViTs

Universal Adder Neural Networks

no code implementations 29 May 2021 Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang

The widely-used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between the input features and the convolution filters, which involves massive multiplications between floating-point values.
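The contrast drawn here, multiply-accumulate cross-correlation versus an addition-only alternative, is the core of AdderNets, which score similarity with a negative L1 distance:

```python
import numpy as np

def cross_correlation(x, w):
    """Ordinary CNN 'convolution': multiply-accumulate similarity."""
    return float(np.sum(x * w))

def adder_similarity(x, w):
    """AdderNet similarity: negative L1 distance, computable with
    additions and subtractions only (no multiplications)."""
    return float(-np.sum(np.abs(x - w)))

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0, 3.0])
best = adder_similarity(x, w)          # 0.0: the maximum, a perfect match
worse = adder_similarity(x, w + 1.0)   # -3.0: similarity drops with distance
```

The L1 form is maximized (at zero) when input and filter match exactly, so it plays the same "template matching" role as cross-correlation at a fraction of the energy cost.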

BCNet: Searching for Network Width with Bilaterally Coupled Network

no code implementations CVPR 2021 Xiu Su, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.

Where and What? Examining Interpretable Disentangled Representations

1 code implementation CVPR 2021 Xinqi Zhu, Chang Xu, DaCheng Tao

We thus impose a perturbation on a certain dimension of the latent code, and expect to identify the perturbation along this dimension from the generated images so that the encoding of simple variations can be enforced.

Disentanglement Model Selection +1

Distilling Object Detectors via Decoupled Features

1 code implementation CVPR 2021 Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu

To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector.

Image Classification Knowledge Distillation +3

Joint Distribution across Representation Space for Out-of-Distribution Detection

no code implementations23 Mar 2021 Jingwei Xu, Siyuan Zhu, Zenan Li, Chang Xu

Specifically, we construct a generative model, called Latent Sequential Gaussian Mixture (LSGM), to depict how the in-distribution latent features are generated in terms of the trace of DNN inference across representation spaces.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

Prioritized Architecture Sampling with Monto-Carlo Tree Search

1 code implementation CVPR 2021 Xiu Su, Tao Huang, Yanxi Li, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

One-shot neural architecture search (NAS) methods significantly reduce the search cost by considering the whole search space as one network, which only needs to be trained once.

Neural Architecture Search

Learning Frequency-aware Dynamic Network for Efficient Super-Resolution

no code implementations ICCV 2021 Wenbin Xie, Dehua Song, Chang Xu, Chunjing Xu, HUI ZHANG, Yunhe Wang

Extensive experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures to obtain the better tradeoff between visual quality and computational complexity.

Image Super-Resolution

Manifold Regularized Dynamic Network Pruning

7 code implementations CVPR 2021 Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, DaCheng Tao, Chang Xu

Then, the manifold relationship between instances and the pruned sub-networks will be aligned in the training procedure.

Network Pruning

LocalDrop: A Hybrid Regularization for Deep Neural Networks

no code implementations1 Mar 2021 Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama

In neural networks, developing regularization algorithms to settle overfitting is one of the major study areas.

Learning Frequency Domain Approximation for Binary Neural Networks

3 code implementations NeurIPS 2021 Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang

Binary neural networks (BNNs) represent original full-precision weights and activations into 1-bit with sign function.
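A hedged toy illustration of the 1-bit representation mentioned above (plain sign binarization only; the paper's frequency-domain approximation is not reproduced here): with ±1 weights and activations, a dot product collapses to counting sign agreements.

```python
import numpy as np

def binarize(x):
    """1-bit quantization with the sign function (0 mapped to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
w = rng.standard_normal(8)   # full-precision weights
a = rng.standard_normal(8)   # full-precision activations

wb, ab = binarize(w), binarize(a)

# With +/-1 values, each position contributes +1 on matching signs and
# -1 on mismatching signs, so the dot product is 2*agreements - n.
dot = float(wb @ ab)
agree = int((wb == ab).sum())
print(dot, agree)
```

This identity is what lets BNN inference replace multiply-accumulate with XNOR/popcount-style operations.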

Hero: On the Chaos When PATH Meets Modules

no code implementations24 Feb 2021 Ying Wang, Liang Qiao, Chang Xu, Yepang Liu, Shing-Chi Cheung, Na Meng, Hai Yu, Zhiliang Zhu

The results showed that Hero achieved a high detection rate of 98.5% on a DM issue benchmark and found 2,422 new DM issues in 2,356 popular Golang projects.

Software Engineering

REST: Relational Event-driven Stock Trend Forecasting

no code implementations15 Feb 2021 Wentao Xu, Weiqing Liu, Chang Xu, Jiang Bian, Jian Yin, Tie-Yan Liu

To remedy the first shortcoming, we propose to model the stock context and learn the effect of event information on the stocks under different contexts.

Locally Free Weight Sharing for Network Width Search

no code implementations ICLR 2021 Xiu Su, Shan You, Tao Huang, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

In this paper, to better evaluate each width, we propose a locally free weight sharing strategy (CafeNet) accordingly.

PTN: A Poisson Transfer Network for Semi-supervised Few-shot Learning

no code implementations20 Dec 2020 Huaxi Huang, Junjie Zhang, Jian Zhang, Qiang Wu, Chang Xu

Second, the extra unlabeled samples are employed to transfer the knowledge from base classes to novel classes through contrastive learning.

Contrastive Learning Few-Shot Learning

Finite particle number description of neutron matter using the unitary correlation operator and high-momentum pair methods

no code implementations3 Dec 2020 Niu Wan, Takayuki Myo, Chang Xu, Hiroshi Toki, Hisashi Horiuchi, Mengjiao Lyu

The central short-range correlation coming from the short-range repulsion in the NN interaction is treated by the unitary correlation operator method (UCOM), and the tensor correlation and spin-orbit effects are described by the two-particle two-hole (2p2h) excitations of nucleon pairs, in which the two nucleons with a large relative momentum are regarded as a high-momentum pair (HM).

Nuclear Theory

Assessing Social License to Operate from the Public Discourse on Social Media

no code implementations COLING 2020 Chang Xu, Cecile Paris, Ross Sparks, Surya Nepal, Keith VanderLinden

Our experimental results show that SIRTA is highly effective in distilling stances from social posts for SLO level assessment, and that the continuous monitoring of SLO levels afforded by SIRTA enables the early detection of critical SLO changes.

text-classification Text Classification +2

Pre-Trained Image Processing Transformer

6 code implementations CVPR 2021 Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao

To maximally excavate the capability of transformer, we present to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs.

 Ranked #1 on Single Image Deraining on Rain100L (using extra training data)

Color Image Denoising Contrastive Learning +2

UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging

no code implementations NeurIPS 2020 Chu Zhou, Hang Zhao, Jin Han, Chang Xu, Chao Xu, Tiejun Huang, Boxin Shi

A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR).

Adapting Neural Architectures Between Domains

1 code implementation NeurIPS 2020 Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu

The power of deep neural networks is to be unleashed for analyzing a large volume of data (e.g., ImageNet), but the architecture search is often executed on another smaller dataset (e.g., CIFAR-10) to finish it in a feasible time.

Domain Adaptation Generalization Bounds +1

A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning

no code implementations2 Nov 2020 Chang Xu, Jun Wang, Yuqing Tang, Francisco Guzman, Benjamin I. P. Rubinstein, Trevor Cohn

In this paper, we show that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data.

Data Poisoning Machine Translation +2

Data Agnostic Filter Gating for Efficient Deep Networks

no code implementations28 Oct 2020 Xiu Su, Shan You, Tao Huang, Hongyan Xu, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu

To deploy a well-trained CNN model on low-end computation edge devices, it is usually supposed to compress or prune the model under certain computation budget (e.g., FLOPs).

SCOP: Scientific Control for Reliable Neural Network Pruning

4 code implementations NeurIPS 2020 Yehui Tang, Yunhe Wang, Yixing Xu, DaCheng Tao, Chunjing Xu, Chao Xu, Chang Xu

To increase the reliability of the results, we prefer to have a more rigorous research design by including a scientific control group as an essential part to minimize the effect of all factors except the association between the filter and expected network output.

Network Pruning

Open-Set Hypothesis Transfer with Semantic Consistency

no code implementations1 Oct 2020 Zeyu Feng, Chang Xu, DaCheng Tao

Unsupervised open-set domain adaptation (UODA) is a realistic problem where unlabeled target data contain unknown classes.

Domain Adaptation

Kernel Based Progressive Distillation for Adder Neural Networks

no code implementations NeurIPS 2020 Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang

A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network; features and weights of the ANN and CNN are transformed to a new space to eliminate the accuracy drop.

Knowledge Distillation

AdderSR: Towards Energy Efficient Image Super-Resolution

no code implementations CVPR 2021 Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, DaCheng Tao

To this end, we thoroughly analyze the relationship between an adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks.

Image Classification Image Super-Resolution

Adversarially Robust Neural Architectures

no code implementations2 Sep 2020 Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu

We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters and show that an appropriate constraint on architecture parameters could reduce the Lipschitz constant to further improve the robustness.

Adversarial Attack Adversarial Robustness

Approximated Bilinear Modules for Temporal Modeling

1 code implementation ICCV 2019 Xinqi Zhu, Chang Xu, Langwen Hui, Cewu Lu, DaCheng Tao

Specifically, we show how two-layer subnets in CNNs can be converted to temporal bilinear modules by adding an auxiliary-branch.

Action Recognition Video Classification

Learning Disentangled Representations with Latent Variation Predictability

1 code implementation ECCV 2020 Xinqi Zhu, Chang Xu, DaCheng Tao

Given image pairs generated by latent codes varying in a single dimension, this varied dimension could be closely correlated with these image pairs if the representation is well disentangled.

Disentanglement

DeepMnemonic: Password Mnemonic Generation via Deep Attentive Encoder-Decoder Model

no code implementations24 Jun 2020 Yao Cheng, Chang Xu, Zhen Hai, Yingjiu Li

Moreover, the user study further validates that the generated mnemonic sentences by DeepMnemonic are useful in helping users memorize strong passwords.

Sentence

HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens

6 code implementations CVPR 2021 Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang, Chao Xu, Chunjing Xu, DaCheng Tao, Chang Xu

To achieve an extremely fast NAS while preserving the high accuracy, we propose to identify the vital blocks and make them the priority in the architecture search.

Neural Architecture Search

TOAN: Target-Oriented Alignment Network for Fine-Grained Image Categorization with Few Labeled Samples

no code implementations28 May 2020 Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Qiang Wu, Chang Xu

The challenges of high intra-class variance yet low inter-class fluctuations in fine-grained visual categorization are more severe with few labeled samples, i.e., Fine-Grained categorization problems under the Few-Shot setting (FGFS).

Fine-Grained Visual Categorization

A Semi-Supervised Assessor of Neural Architectures

no code implementations CVPR 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relation modeled by the graph.

Neural Architecture Search

Automatic low-bit hybrid quantization of neural networks through meta learning

no code implementations24 Apr 2020 Tao Wang, Junsong Wang, Chang Xu, Chao Xue

With the best searched quantization policy, we subsequently retrain or finetune to further improve the performance of the quantized target network.

Meta-Learning Quantization +1

Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection

1 code implementation CVPR 2020 Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, Chang Xu

To this end, we propose a hierarchical trinity search framework to simultaneously discover efficient architectures for all components (i.e., backbone, neck, and head) of object detector in an end-to-end manner.

Image Classification Neural Architecture Search +3

K-Core based Temporal Graph Convolutional Network for Dynamic Graphs

1 code implementation22 Mar 2020 Jingxin Liu, Chang Xu, Chang Yin, Weiqiang Wu, You Song

Graph representation learning is a fundamental task in various applications that strives to learn low-dimensional embeddings for nodes that can preserve graph topology information.

Dynamic graph embedding Graph Representation Learning +1

DAN: Dual-View Representation Learning for Adapting Stance Classifiers to New Domains

no code implementations13 Mar 2020 Chang Xu, Cecile Paris, Surya Nepal, Ross Sparks, Chong Long, Yafang Wang

We address the issue of having a limited number of annotations for stance classification in a new domain, by adapting out-of-domain classifiers with domain adaptation.

Domain Adaptation Representation Learning +1

Distilling portable Generative Adversarial Networks for Image Translation

no code implementations7 Mar 2020 Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu

To promote the capability of student generator, we include a student discriminator to measure the distances between real images, and images generated by student and teacher generators.

Image-to-Image Translation Knowledge Distillation +1

Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks

2 code implementations23 Feb 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu

On one hand, massive trainable parameters significantly enhance the performance of these deep networks.

Discernible Image Compression

no code implementations17 Feb 2020 Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian

Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.

Image Compression object-detection +1

On Positive-Unlabeled Classification in GAN

1 code implementation CVPR 2020 Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, DaCheng Tao

In contrast, it is more reasonable to treat the generated data as unlabeled, which could be positive or negative according to their quality.

Classification General Classification

AdderNet: Do We Really Need Multiplications in Deep Learning?

7 code implementations CVPR 2020 Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

The widely-used convolutions in deep neural networks are exactly cross-correlation to measure the similarity between input feature and convolution filters, which involves massive multiplications between float values.
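The contrast above can be sketched numerically (a toy single-channel, stride-1 example; not the paper's implementation): a standard convolution response is a multiply-accumulate, while an adder-style response measures similarity as a negative L1 distance, using no multiplications.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))   # single-channel input
f = rng.standard_normal((3, 3))   # one filter
k = f.shape[0]
h = x.shape[0] - k + 1            # output size for stride 1, no padding

def conv_response(x, f):
    """Standard CNN 'convolution' (cross-correlation): multiply-accumulate."""
    return np.array([[(x[i:i+k, j:j+k] * f).sum() for j in range(h)]
                     for i in range(h)])

def adder_response(x, f):
    """Multiplication-free similarity: negative L1 distance to the filter."""
    return np.array([[-np.abs(x[i:i+k, j:j+k] - f).sum() for j in range(h)]
                     for i in range(h)])

print(conv_response(x, f).shape, adder_response(x, f).shape)
```

Note the adder response is always non-positive and peaks (at zero) when a patch exactly matches the filter.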

Learning from Bad Data via Generation

no code implementations NeurIPS 2019 Tianyu Guo, Chang Xu, Boxin Shi, Chao Xu, DaCheng Tao

A worst-case formulation can be developed over this distribution set, and then be interpreted as a generation task in an adversarial manner.

GhostNet: More Features from Cheap Operations

34 code implementations CVPR 2020 Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu

Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources.

Image Classification

Operational Calibration: Debugging Confidence Errors for DNNs in the Field

no code implementations6 Oct 2019 Zenan Li, Xiaoxing Ma, Chang Xu, Jingwei Xu, Chun Cao, Jian Lü

Trained DNN models are increasingly adopted as integral parts of software systems, but they often perform deficiently in the field.

ReNAS: Relativistic Evaluation of Neural Architecture Search

4 code implementations30 Sep 2019 Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Efficient Residual Dense Block Search for Image Super-Resolution

3 code implementations25 Sep 2019 Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang

Focusing on this issue, we propose an efficient residual dense block search algorithm with multiple objectives to hunt for fast, lightweight and accurate networks for image super-resolution.

Image Super-Resolution

Positive-Unlabeled Compression on the Cloud

2 code implementations NeurIPS 2019 Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, DaCheng Tao, Chang Xu

In practice, only a small portion of the original training set is required as positive examples and more useful training examples can be obtained from the massive unlabeled data on the cloud through a PU classifier with an attention based multi-scale feature extractor.

Knowledge Distillation

CARS: Continuous Evolution for Efficient Neural Architecture Search

1 code implementation CVPR 2020 Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu

Architectures in the population that share parameters within one SuperNet in the latest generation will be tuned over the training dataset with a few epochs.

Neural Architecture Search

Full-Stack Filters to Build Minimum Viable CNNs

1 code implementation6 Aug 2019 Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, DaCheng Tao, Chang Xu

Existing works used to decrease the number or size of requested convolution filters for a minimum viable CNN on edge devices.

Learning Instance-wise Sparsity for Accelerating Deep Models

no code implementations27 Jul 2019 Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu

Exploring deep convolutional neural networks of high efficiency and low memory usage is very essential for a wide variety of machine learning tasks.

Attribute Aware Pooling for Pedestrian Attribute Recognition

no code implementations27 Jul 2019 Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu

This paper expands the strength of deep convolutional neural networks (CNNs) to the pedestrian attribute recognition problem by devising a novel attribute aware pooling algorithm.

Attribute Pedestrian Attribute Recognition

Co-Evolutionary Compression for Unpaired Image Translation

2 code implementations ICCV 2019 Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu

Generative adversarial networks (GANs) have been successfully used for considerable computer vision tasks, especially the image-to-image translation.

Image-to-Image Translation Translation

Bilinear Graph Networks for Visual Question Answering

no code implementations23 Jul 2019 Dalu Guo, Chang Xu, DaCheng Tao

The question-graph exchanges information between these output nodes from image-graph to amplify the implicit yet important relationship between objects.

Question Answering Visual Question Answering

Bringing Giant Neural Networks Down to Earth with Unlabeled Data

no code implementations13 Jul 2019 Yehui Tang, Shan You, Chang Xu, Boxin Shi, Chao Xu

Specifically, we exploit the unlabeled data to mimic the classification characteristics of giant networks, so that the original capacity can be preserved nicely.

Boosting Operational DNN Testing Efficiency through Conditioning

1 code implementation6 Jun 2019 Zenan Li, Xiaoxing Ma, Chang Xu, Chun Cao, Jingwei Xu, Jian Lü

With the increasing adoption of Deep Neural Network (DNN) models as integral parts of software systems, efficient operational testing of DNNs is much in demand to ensure these models' actual performance in field conditions.

DNN Testing

Recognising Agreement and Disagreement between Stances with Reason Comparing Networks

no code implementations ACL 2019 Chang Xu, Cecile Paris, Surya Nepal, Ross Sparks

We identify agreement and disagreement between utterances that express stances towards a topic of discussion.

Multi-view Vector-valued Manifold Regularization for Multi-label Image Classification

no code implementations8 Apr 2019 Yong Luo, DaCheng Tao, Chang Xu, Chao Xu, Hong Liu, Yonggang Wen

In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle and tree) and is properly characterized by multiple visual features (e.g., color, texture and shape).

General Classification Multi-Label Image Classification

Multi-View Intact Space Learning

no code implementations4 Apr 2019 Chang Xu, DaCheng Tao, Chao Xu

In this paper, we propose the Multi-view Intact Space Learning (MISL) algorithm, which integrates the encoded complementary information in multiple views to discover a latent intact representation of the data.

Multi-view Learning

Cost-Sensitive Feature Selection by Optimizing F-Measures

no code implementations4 Apr 2019 Meng Liu, Chang Xu, Yong Luo, Chao Xu, Yonggang Wen, DaCheng Tao

Feature selection is beneficial for improving the performance of general machine learning tasks by extracting an informative subset from the high-dimensional features.

feature selection

Gated-GAN: Adversarial Gated Networks for Multi-Collection Style Transfer

2 code implementations4 Apr 2019 Xinyuan Chen, Chang Xu, Xiaokang Yang, Li Song, DaCheng Tao

We propose adversarial gated networks (Gated GAN) to transfer multiple styles in a single model.

Style Transfer

Data-Free Learning of Student Networks

3 code implementations ICCV 2019 Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian

Learning portable neural networks is very essential for computer vision for the purpose that pre-trained heavy deep models can be well applied on edge devices such as mobile phones and micro sensors.

Neural Network Compression

Image-Question-Answer Synergistic Network for Visual Dialog

no code implementations CVPR 2019 Dalu Guo, Chang Xu, DaCheng Tao

Afterward, in the second stage, answers with high probability of being correct are re-ranked by synergizing with image and question.

Visual Dialog

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

4 code implementations18 Feb 2019 Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal

Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation.

Cryptography and Security

Learning Student Networks via Feature Embedding

no code implementations17 Dec 2018 Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, DaCheng Tao

Experiments on benchmark datasets and well-trained networks suggest that the proposed algorithm is superior to state-of-the-art teacher-student learning methods in terms of computational and storage complexity.

Knowledge Distillation

Modeling Local Dependence in Natural Language with Multi-channel Recurrent Neural Networks

no code implementations13 Nov 2018 Chang Xu, Weiran Huang, Hongwei Wang, Gang Wang, Tie-Yan Liu

In this paper, we propose an improved variant of RNN, Multi-Channel RNN (MC-RNN), to dynamically capture and leverage local semantic structure information.

Abstractive Text Summarization Language Modelling +2

Robust Student Network Learning

no code implementations30 Jul 2018 Tianyu Guo, Chang Xu, Shiyi He, Boxin Shi, Chao Xu, DaCheng Tao

In this way, a portable student network with significantly fewer parameters can achieve a considerable accuracy which is comparable to that of teacher network.

Cross-Target Stance Classification with Self-Attention Networks

1 code implementation ACL 2018 Chang Xu, Cecile Paris, Surya Nepal, Ross Sparks

In stance classification, the target on which the stance is made defines the boundary of the task, and a classifier is usually trained for prediction on the same target.

Classification General Classification +1

Graph Edge Convolutional Neural Networks for Skeleton Based Action Recognition

no code implementations16 May 2018 Xikun Zhang, Chang Xu, Xinmei Tian, DaCheng Tao

Considering the complementarity between graph node convolution and graph edge convolution, we additionally construct two hybrid neural networks to combine graph node convolutional neural network and graph edge convolutional neural network using shared intermediate layers.

Action Recognition Pose Estimation +2

Evolutionary Generative Adversarial Networks

3 code implementations1 Mar 2018 Chaoyue Wang, Chang Xu, Xin Yao, DaCheng Tao

In this paper, we propose a novel GAN framework called evolutionary generative adversarial networks (E-GAN) for stable GAN training and improved generative performance.

Beyond Filters: Compact Feature Map for Portable Deep Model

1 code implementation ICML 2017 Yunhe Wang, Chang Xu, Chao Xu, DaCheng Tao

The filter is then re-configured to establish the mapping from original input to the new compact feature map, and the resulting network can preserve intrinsic information of the original network with significantly fewer parameters, which not only decreases the online memory for launching CNN but also accelerates the computation speed.

Towards Evolutional Compression

no code implementations25 Jul 2017 Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, DaCheng Tao

In contrast to directly recognizing subtle weights or filters as redundant in a given CNN, this paper presents an evolutionary method to automatically eliminate redundant convolution filters.

Perceptual Adversarial Networks for Image-to-Image Transformation

2 code implementations28 Jun 2017 Chaoyue Wang, Chang Xu, Chaohui Wang, DaCheng Tao

The proposed PAN consists of two feed-forward convolutional neural networks (CNNs), the image transformation network T and the discriminative network D. Through combining the generative adversarial loss and the proposed perceptual adversarial loss, these two networks can be trained alternately to solve image-to-image transformation tasks.

Image Inpainting

Reinforcement Learning for Learning Rate Control

no code implementations31 May 2017 Chang Xu, Tao Qin, Gang Wang, Tie-Yan Liu

Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks.

reinforcement-learning Reinforcement Learning (RL)

Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering

no code implementations International Joint Conference on Artificial Intelligence 2017 Chaoyue Wang, Chaohui Wang, Chang Xu, DaCheng Tao

The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are complete/partially tagged (i.e., supervised/semi-supervised setting).

Object TAG

Privileged Multi-label Learning

no code implementations25 Jan 2017 Shan You, Chang Xu, Yunhe Wang, Chao Xu, DaCheng Tao

This paper presents privileged multi-label learning (PrML) to explore and exploit the relationship between labels in multi-label learning problems.

Multi-Label Learning

Streaming View Learning

no code implementations28 Apr 2016 Chang Xu, DaCheng Tao, Chao Xu

An underlying assumption in conventional multi-view learning algorithms is that all views can be simultaneously accessed.

Multi-view Learning

Streaming Label Learning for Modeling Labels on the Fly

no code implementations19 Apr 2016 Shan You, Chang Xu, Yunhe Wang, Chao Xu, DaCheng Tao

The core of SLL is to explore and exploit the relationships between new labels and past labels and then inherit the relationship into hypotheses of labels to boost the performance of new classifiers.

Multi-Label Learning

Parts for the Whole: The DCT Norm for Extreme Visual Recovery

no code implementations19 Apr 2016 Yunhe Wang, Chang Xu, Shan You, DaCheng Tao, Chao Xu

Here we study the extreme visual recovery problem, in which over 90% of pixel values in a given image are missing.

Local Rademacher Complexity for Multi-label Learning

no code implementations26 Oct 2014 Chang Xu, Tongliang Liu, DaCheng Tao, Chao Xu

We analyze the local Rademacher complexity of empirical risk minimization (ERM)-based multi-label learning algorithms, and in doing so propose a new algorithm for multi-label learning.

Multi-Label Learning

A Survey on Multi-view Learning

no code implementations20 Apr 2013 Chang Xu, DaCheng Tao, Chao Xu

Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace.

Multi-view Learning
