no code implementations • 26 Nov 2023 • Jiawang Bai, Kuofeng Gao, Shaobo Min, Shu-Tao Xia, Zhifeng Li, Wei Liu
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising effectiveness in addressing downstream image recognition tasks.
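A minimal sketch of the symmetric contrastive objective used in CLIP-style pre-training (not the paper's code); the embedding tensors and temperature value are assumptions for illustration.

```python
# CLIP-style symmetric contrastive loss over a batch of matched image-text pairs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)            # unit-length image features
    txt_emb = F.normalize(txt_emb, dim=-1)            # unit-length text features
    logits = img_emb @ txt_emb.t() / temperature      # scaled cosine similarities
    targets = torch.arange(len(logits), device=logits.device)
    # Symmetric cross-entropy: match images to texts and texts to images.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage with random features standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```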
no code implementations • 23 Nov 2023 • Shiyu Qin, Bin Chen, Yujun Huang, Baoyi An, Tao Dai, Shu-Tao Xia
The explosion of data has resulted in more and more associated text being transmitted along with images.
no code implementations • 23 Nov 2023 • Shiyu Qin, Yimin Zhou, Jinpeng Wang, Bin Chen, Baoyi An, Tao Dai, Shu-Tao Xia
In this paper, we propose a progressive learning paradigm for transformer-based variable-rate image compression.
no code implementations • 8 Oct 2023 • Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, Shu-Tao Xia
In addition, PRVR methods ignore semantic differences between text queries relevant to the same video, leading to a sparse embedding space.
1 code implementation • 20 Sep 2023 • Peiyuan Liu, Beiliang Wu, Naiqi Li, Tao Dai, Fengmao Lei, Jigang Bao, Yong Jiang, Shu-Tao Xia
Recent CNN and Transformer-based models tried to utilize frequency and periodicity information for long-term time series forecasting.
1 code implementation • ICCV 2023 • Guanhao Gan, Yiming Li, Dongxian Wu, Shu-Tao Xia
To protect the copyright of DNNs, backdoor-based ownership verification has recently become popular, in which the model owner watermarks the model by embedding a specific backdoor behavior before releasing it.
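An illustrative sketch of backdoor-style watermark injection (not this paper's exact method): a small trigger patch is stamped onto a fraction of training images, which are relabeled to a target class; the patch size, position, and poison rate are assumptions.

```python
# Poison a small fraction of a training set with a fixed trigger patch.
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, patch_value=1.0):
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    # 3x3 trigger in the bottom-right corner of each selected image (H, W, C layout).
    images[idx, -3:, -3:, :] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Usage on dummy data shaped like CIFAR-10.
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_wm, y_wm, poisoned_idx = poison_dataset(x, y)
```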
1 code implementation • 22 Aug 2023 • Jinpeng Wang, Ziyun Zeng, Yunxiao Wang, Yuting Wang, Xingyu Lu, Tianxiang Li, Jun Yuan, Rui Zhang, Hai-Tao Zheng, Shu-Tao Xia
We propose MISSRec, a multi-modal pre-training and transfer learning framework for SR. On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests while a novel interest-aware decoder is developed to grasp item-modality-interest relations for better sequence representation.
1 code implementation • ICCV 2023 • Jianshuo Dong, Han Qiu, Yiming Li, Tianwei Zhang, Yuanjie Li, Zeqi Lai, Chao Zhang, Shu-Tao Xia
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.
1 code implementation • ICCV 2023 • Hao Fang, Bin Chen, Xuan Wang, Zhi Wang, Shu-Tao Xia
Federated Learning (FL) has recently emerged as a promising distributed machine learning framework to preserve clients' privacy, by allowing multiple clients to upload the gradients calculated from their local data to a central server.
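A minimal sketch of the server-side aggregation step described above, assuming a FedAvg-style weighted average of client gradients (variable names and the learning rate are illustrative, not from the paper).

```python
# Server-side weighted aggregation of client gradients in federated learning.
import numpy as np

def aggregate_gradients(client_grads, client_sizes):
    """Weighted average of per-client gradients (each a list of numpy arrays)."""
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    return [sum(w * g[layer] for w, g in zip(weights, client_grads))
            for layer in range(len(client_grads[0]))]

def server_update(global_params, client_grads, client_sizes, lr=0.1):
    avg_grad = aggregate_gradients(client_grads, client_sizes)
    return [p - lr * g for p, g in zip(global_params, avg_grad)]

# Two clients, each holding one weight matrix and one bias vector.
params = [np.zeros((4, 2)), np.zeros(2)]
grads = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(2)]
params = server_update(params, grads, client_sizes=[100, 300])
```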
1 code implementation • 5 Aug 2023 • Hang Guo, Tao Dai, Mingyan Zhu, Guanghao Meng, Bin Chen, Zhi Wang, Shu-Tao Xia
Current solutions for low-resolution text recognition (LTR) typically rely on a two-stage pipeline that involves super-resolution as the first stage followed by the second-stage recognition.
1 code implementation • 19 Jul 2023 • Hang Guo, Tao Dai, Guanghao Meng, Shu-Tao Xia
Scene text image super-resolution (STISR), aiming to improve image quality while boosting downstream scene text recognition accuracy, has recently achieved great success.
1 code implementation • 23 May 2023 • Ziyun Zeng, Yixiao Ge, Zhan Tong, Xihui Liu, Shu-Tao Xia, Ying Shan
We argue that tuning a text encoder end-to-end, as done in previous work, is suboptimal since it may overfit in terms of styles, thereby losing its original generalization ability to capture the semantics of various language registers.
no code implementations • 18 May 2023 • Taolin Zhang, Sunan He, Dai Tao, Bin Chen, Zhi Wang, Shu-Tao Xia
In recent years, vision language pre-training frameworks have made significant progress in natural language processing and computer vision, achieving remarkable performance improvement on various downstream tasks.
1 code implementation • 11 May 2023 • Yinghua Gao, Yiming Li, Xueluan Gong, Shu-Tao Xia, Qian Wang
More importantly, it is not feasible to simply combine existing methods to design an effective sparse and invisible backdoor attack.
no code implementations • 9 May 2023 • Jiajun Fan, Yuzheng Zhuang, Yuecheng Liu, Jianye Hao, Bin Wang, Jiangcheng Zhu, Hao Wang, Shu-Tao Xia
The exploration problem is one of the main challenges in deep reinforcement learning (RL).
Ranked #1 on Atari Games on Atari-57
2 code implementations • ICCV 2023 • Yaohua Zha, Jinpeng Wang, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia
To conquer this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
no code implementations • 29 Mar 2023 • Mingqing Wang, Jiawei Li, Zhenyang Li, Chengxiao Luo, Bin Chen, Shu-Tao Xia, Zhi Wang
In this work, the VQVAE focuses on feature extraction and reconstruction of images, while the transformers fit the manifold and locate anomalies in the latent space.
1 code implementation • CVPR 2023 • Kuofeng Gao, Yang Bai, Jindong Gu, Yong Yang, Shu-Tao Xia
With the split clean data pool and polluted data pool, ASD successfully defends against backdoor attacks during training.
no code implementations • 22 Feb 2023 • Bowen Zhao, Chen Chen, Qian-Wei Wang, Anfeng He, Shu-Tao Xia
For challenge B, we point out that the gradient contribution statistics can be a reliable indicator to inspect whether the optimization is dominated by bias-aligned samples.
1 code implementation • 1 Feb 2023 • Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, Shu-Tao Xia
Third-party resources (e.g., samples, backbones, and pre-trained models) are usually involved in the training of deep neural networks (DNNs), which brings backdoor attacks as a new training-phase threat.
no code implementations • 30 Jan 2023 • Bowen Zhao, Chen Chen, Shu-Tao Xia
However, we find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
no code implementations • ICCV 2023 • Xinyi Zhang, Naiqi Li, Jiawei Li, Tao Dai, Yong Jiang, Shu-Tao Xia
Unsupervised surface anomaly detection aims at discovering and localizing anomalous patterns using only anomaly-free training samples.
1 code implementation • 2 Nov 2022 • Chengxiao Luo, Yiming Li, Yong Jiang, Shu-Tao Xia
The backdoored model has promising performance in predicting benign samples, whereas its predictions can be maliciously manipulated by adversaries based on activating its backdoors with pre-defined trigger patterns.
1 code implementation • 2 Nov 2022 • Sheng Yang, Yiming Li, Yong Jiang, Shu-Tao Xia
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to backdoor attacks during the training process.
no code implementations • 2 Nov 2022 • Tong Xu, Yiming Li, Yong Jiang, Shu-Tao Xia
The backdoor adversaries intend to maliciously control the predictions of attacked DNNs by injecting hidden backdoors that can be activated by adversary-specified trigger patterns during the training process.
no code implementations • 20 Oct 2022 • Qian-Wei Wang, Bowen Zhao, Mingyan Zhu, Tianxiang Li, Zimo Liu, Shu-Tao Xia
In real-world crowdsourcing annotation systems, due to differences in user knowledge and cultural backgrounds, as well as the high cost of acquiring annotation information, the supervision information we obtain might be insufficient and ambiguous.
1 code implementation • 16 Oct 2022 • Yuyuan Zeng, Bowen Zhao, Shanzhao Qiu, Tao Dai, Shu-Tao Xia
Most existing methods mainly focus on extracting global features from tampered images, while neglecting the relationships of local features between tampered and authentic regions within a single tampered image.
no code implementations • 30 Sep 2022 • Wenjie Li, Qiaolin Xia, Hao Cheng, Kouyin Xue, Shu-Tao Xia
Specifically, we build an inference-efficient single-party student model applicable to the whole sample space and meanwhile maintain the advantage of the federated feature extension.
1 code implementation • CVPR 2023 • Ziyun Zeng, Yuying Ge, Xihui Liu, Bin Chen, Ping Luo, Shu-Tao Xia, Yixiao Ge
Pre-training on large-scale video data has become a common recipe for learning transferable spatiotemporal representations in recent years.
1 code implementation • 27 Sep 2022 • Yiming Li, Yang Bai, Yong Jiang, Yong Yang, Shu-Tao Xia, Bo Li
In this paper, we revisit dataset ownership verification.
no code implementations • 6 Sep 2022 • Yujun Huang, Bin Chen, Shiyu Qin, Jiawei Li, YaoWei Wang, Tao Dai, Shu-Tao Xia
Specifically, MSFDPM consists of a side information feature extractor, a multi-scale feature domain patch matching module, and a multi-scale feature fusion network.
1 code implementation • 17 Aug 2022 • Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia
Existing attacks often insert some additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud.
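A hedged sketch of the rotation-style trigger mentioned above for existing attacks: a fixed linear transformation applied to the point cloud serves as the trigger (the axis and angle here are arbitrary assumptions).

```python
# Construct a poisoned point cloud by applying a fixed rotation as the trigger.
import numpy as np

def rotate_z(points, angle_deg=15.0):
    """Rotate an (N, 3) point cloud about the z-axis by a fixed trigger angle."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return points @ rot.T

cloud = np.random.rand(1024, 3)
poisoned_cloud = rotate_z(cloud)   # would be paired with the attacker's target label
```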
1 code implementation • 7 Aug 2022 • Hongwei Li, Tao Dai, Yiming Li, Xueyi Zou, Shu-Tao Xia
Image representation is critical for many visual tasks.
1 code implementation • 4 Aug 2022 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shu-Tao Xia, Xiaochun Cao
In general, we conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
1 code implementation • 4 Aug 2022 • Yiming Li, Mingyan Zhu, Xue Yang, Yong Jiang, Tao Wei, Shu-Tao Xia
The rapid development of DNNs has benefited from the existence of some high-quality datasets (e.g., ImageNet), which allow researchers and developers to easily verify the performance of their methods.
1 code implementation • 27 Jul 2022 • Jiawang Bai, Kuofeng Gao, Dihong Gong, Shu-Tao Xia, Zhifeng Li, Wei Liu
The security of deep neural networks (DNNs) has attracted increasing attention due to their widespread use in various applications.
1 code implementation • 25 Jul 2022 • Jiawang Bai, Baoyuan Wu, Zhifeng Li, Shu-Tao Xia
Utilizing the latest technique in integer programming, we equivalently reformulate this MIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM).
1 code implementation • 5 Jul 2022 • Sunan He, Taian Guo, Tao Dai, Ruizhi Qiao, Bo Ren, Shu-Tao Xia
Specifically, our method exploits multi-modal knowledge of image-text pairs based on a vision and language pre-training (VLP) model.
Ranked #1 on Multi-label zero-shot learning on Open Images V4
no code implementations • 31 May 2022 • Wenjie Li, Qiaolin Xia, Junfeng Deng, Hao Cheng, Jiangming Liu, Kouying Xue, Yong Cheng, Shu-Tao Xia
As an emerging secure learning paradigm for leveraging cross-agency private data, vertical federated learning (VFL) is expected to improve advertising models by enabling the joint learning of complementary user attributes privately owned by the advertiser and the publisher.
no code implementations • 19 May 2022 • Qiang Li, Tao Dai, Shu-Tao Xia
Recently, deep learning methods have shown great success in 3D point cloud upsampling.
1 code implementation • 3 Apr 2022 • Jiawang Bai, Li Yuan, Shu-Tao Xia, Shuicheng Yan, Zhifeng Li, Wei Liu
Inspired by this finding, we first investigate the effects of existing techniques for improving ViT models from a new frequency perspective, and find that the success of some techniques (e.g., RandAugment) can be attributed to the better usage of the high-frequency components.
Ranked #2 on Domain Generalization on Stylized-ImageNet
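A small sketch of the frequency-domain view used above: split an image into low- and high-frequency components with a radial mask in the Fourier domain (the cutoff radius is an arbitrary assumption).

```python
# Decompose a grayscale image into low- and high-frequency components via FFT.
import numpy as np

def frequency_split(img, radius=8):
    """Return (low_freq, high_freq) components of a 2D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2  # low-pass disk
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~mask))).real
    return low, high

low, high = frequency_split(np.random.rand(64, 64))
```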
no code implementations • 27 Mar 2022 • Neng Wang, Yang Bai, Kun Yu, Yong Jiang, Shu-Tao Xia, Yan Wang
Face forgery has attracted increasing attention in recent applications of computer vision.
no code implementations • 22 Feb 2022 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find the threat model in adversarial training matters.
1 code implementation • 7 Feb 2022 • Jinpeng Wang, Bin Chen, Dongliang Liao, Ziyun Zeng, Gongfu Li, Shu-Tao Xia, Jin Xu
By performing Asymmetric-Quantized Contrastive Learning (AQ-CL) across views, HCQ aligns texts and videos at coarse-grained and multiple fine-grained levels.
1 code implementation • ICLR 2022 • Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
1 code implementation • ICML Workshop AML 2021 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, Xiaochun Cao
In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
no code implementations • NeurIPS 2021 • Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang
Adversarial robustness has received increasing attention along with the study of adversarial examples.
1 code implementation • 25 Nov 2021 • Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang
Adversarial robustness has received increasing attention along with the study of adversarial examples.
1 code implementation • 25 Nov 2021 • Bowen Zhao, Chen Chen, Qian-Wei Wang, Anfeng He, Shu-Tao Xia
For challenge B, we point out that the gradient contribution statistics can be a reliable indicator to inspect whether the optimization is dominated by bias-aligned samples.
1 code implementation • 25 Nov 2021 • Sen yang, Zhicheng Wang, Ze Chen, YanJie Li, Shoukui Zhang, Zhibin Quan, Shu-Tao Xia, Yiping Bao, Erjin Zhou, Wankou Yang
This paper presents a new method to solve keypoint detection and instance association by using Transformer.
Ranked #10 on Multi-Person Pose Estimation on COCO
no code implementations • 29 Sep 2021 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama
Based on thorough experiments, we find that such trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.
no code implementations • 29 Sep 2021 • Naiqi Li, Wenjie Li, Yong Jiang, Shu-Tao Xia
In this paper we propose the deep Dirichlet process mixture (DDPM) model, which is an unsupervised method that simultaneously performs clustering and feature learning.
1 code implementation • 18 Sep 2021 • Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, Shu-Tao Xia
To this end, we propose the confusing perturbations-induced backdoor attack (CIBA).
1 code implementation • 11 Sep 2021 • Jinpeng Wang, Ziyun Zeng, Bin Chen, Tao Dai, Shu-Tao Xia
The high efficiency in computation and storage makes hashing (including binary hashing and quantization) a common strategy in large-scale retrieval systems.
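A minimal sketch of why binary hashing is efficient for retrieval (illustrative only, not this paper's model): sign-based binarization plus Hamming-distance ranking, using the identity that for ±1 codes the inner product determines the Hamming distance.

```python
# Sign-based binary hashing and Hamming-distance ranking.
import numpy as np

def binarize(features):
    """Map real-valued features (N, d) to +/-1 codes via the sign function."""
    return np.where(features >= 0, 1, -1).astype(np.int8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    d = query_code.shape[0]
    inner = db_codes.astype(np.int32) @ query_code.astype(np.int32)
    dists = (d - inner) // 2          # for +/-1 codes: <x, y> = d - 2 * Hamming(x, y)
    return np.argsort(dists)

db = binarize(np.random.randn(1000, 64))
q = binarize(np.random.randn(64))
top10 = hamming_rank(q, db)[:10]
```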
no code implementations • 11 Sep 2021 • Ziyun Zeng, Jinpeng Wang, Bin Chen, Tao Dai, Shu-Tao Xia
Deep hashing approaches, including deep quantization and deep binary hashing, have become a common solution to large-scale image retrieval due to high computation and storage efficiency.
3 code implementations • 7 Jul 2021 • YanJie Li, Sen yang, Peidong Liu, Shoukui Zhang, Yunxiao Wang, Zhicheng Wang, Wankou Yang, Shu-Tao Xia
2D heatmap-based approaches have dominated Human Pose Estimation (HPE) for years due to their high performance.
no code implementations • ICML Workshop AML 2021 • Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia
We propose the universal adversarial head (UAH), which crafts adversarial query videos by prepending the original videos with a sequence of adversarial frames to perturb the normal hash codes in the Hamming space.
no code implementations • 12 Jun 2021 • Jiying Zhang, Yuzhao Chen, Xi Xiao, Runiu Lu, Shu-Tao Xia
Hypergraph Convolutional Neural Networks (HGCNNs) have demonstrated their potential in modeling high-order relations preserved in graph-structured data.
no code implementations • 10 Jun 2021 • Jiying Zhang, Yuzhao Chen, Xi Xiao, Runiu Lu, Shu-Tao Xia
HyperGraph Convolutional Neural Networks (HGCNNs) have demonstrated their potential in modeling high-order relations preserved in graph structured data.
1 code implementation • Proceedings of the AAAI Conference on Artificial Intelligence 2021 • Jinpeng Wang, Bin Chen, Qiang Zhang, Zaiqiao Meng, Shangsong Liang, Shu-Tao Xia
Deep quantization methods have shown high efficiency on large-scale image retrieval.
1 code implementation • ICCV 2021 • YanJie Li, Shoukui Zhang, Zhicheng Wang, Sen yang, Wankou Yang, Shu-Tao Xia, Erjin Zhou
Most existing CNN-based methods do well in visual representation, however, lacking in the ability to explicitly learn the constraint relationships between keypoints.
no code implementations • 6 Apr 2021 • Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
1 code implementation • ICLR 2021 • Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang
The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).
no code implementations • 6 Mar 2021 • Yiming Li, YanJie Li, Yalei Lv, Yong Jiang, Shu-Tao Xia
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which intends to embed hidden backdoors in DNNs by poisoning training data.
2 code implementations • ICLR 2021 • Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia
By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM).
1 code implementation • NeurIPS 2020 • Naiqi Li, Wenjie Li, Jifeng Sun, Yinghua Gao, Yong Jiang, Shu-Tao Xia
In this paper we propose Stochastic Deep Gaussian Processes over Graphs (DGPG), which are deep structure models that learn the mappings between input and output signals in graph domains.
1 code implementation • 22 Oct 2020 • Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
no code implementations • 18 Oct 2020 • Xingchun Xiang, Qingtao Tang, Huaixuan Zhang, Tao Dai, Jiawei Li, Shu-Tao Xia
To address this issue, we propose a novel regression tree, named the James-Stein Regression Tree (JSRT), which considers global information from different nodes.
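A hedged sketch of James-Stein-style shrinkage of per-leaf means toward the global mean, as one way to use global information across nodes (this is the classical positive-part James-Stein estimator, not the exact JSRT estimator from the paper; the noise variance is assumed known).

```python
# Shrink per-leaf regression targets toward the grand mean (James-Stein style).
import numpy as np

def shrink_leaf_means(leaf_values, sigma2):
    """leaf_values: list of 1-D arrays of training targets falling in each leaf."""
    leaf_means = np.array([v.mean() for v in leaf_values])
    grand_mean = np.concatenate(leaf_values).mean()
    k = len(leaf_means)
    ss = np.sum((leaf_means - grand_mean) ** 2)
    # Positive-part James-Stein shrinkage factor.
    shrink = 1.0 - (k - 3) * sigma2 / ss if ss > 0 else 1.0
    shrink = min(max(shrink, 0.0), 1.0)
    return grand_mean + shrink * (leaf_means - grand_mean)

leaves = [np.random.randn(20) + m for m in (0.0, 0.5, 1.0, 2.0)]
shrunk = shrink_leaf_means(leaves, sigma2=1.0)
```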
no code implementations • 16 Oct 2020 • Shudeng Wu, Tao Dai, Shu-Tao Xia
Recently, deep neural networks (DNNs) have been widely and successfully used in Object Detection, e.g.
2 code implementations • 12 Oct 2020 • Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia
Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probability that the suspicious third-party model assigns to the target class for benign samples and their correspondingly watermarked samples (i.e., images with the trigger).
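A hedged sketch of such a hypothesis-test-guided check (illustrative only, not the paper's exact test): compare the suspicious model's target-class posterior on benign samples and on their trigger-stamped counterparts with a one-sided paired t-test.

```python
# Paired one-sided t-test on target-class posteriors for dataset verification.
import numpy as np
from scipy import stats

def verify_dataset(p_benign, p_watermarked, alpha=0.05):
    """p_benign / p_watermarked: target-class probabilities for paired samples."""
    t_stat, p_two_sided = stats.ttest_rel(p_watermarked, p_benign)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return p_one_sided < alpha   # True -> the model likely trained on the watermarked data

# Dummy posteriors: triggered samples get a noticeably higher target-class probability.
benign = np.random.uniform(0.05, 0.2, size=100)
marked = benign + np.random.uniform(0.3, 0.6, size=100)
print(verify_dataset(benign, marked))
```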
1 code implementation • ECCV 2020 • Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo
Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, however they are under the risk of adversarial examples that can be easily generated when the target model is accessible to an attacker (white-box setting).
no code implementations • 21 Aug 2020 • Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Shu-Tao Xia
Interpretability and effectiveness are two essential and indispensable requirements for adopting machine learning methods in reality.
no code implementations • 14 Aug 2020 • Jie Fang, Jian-Wu Lin, Shu-Tao Xia, Yong Jiang, Zhikang Xia, Xiang Liu
This paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a tailored neural network framework that can automatically construct diversified financial factors based on financial domain knowledge and a variety of neural network structures.
1 code implementation • 17 Jul 2020 • Yiming Li, Yong Jiang, Zhifeng Li, Shu-Tao Xia
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so that the attacked models perform well on benign samples, whereas their predictions will be maliciously changed if the hidden backdoor is activated by attacker-specified triggers.
1 code implementation • ECCV 2020 • Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Most existing works attempt post-hoc interpretation on a pre-trained model, while neglecting to reduce the entanglement underlying the model.
no code implementations • 1 Jul 2020 • Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale, well-annotated datasets.
2 code implementations • ECCV 2020 • Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shu-Tao Xia, En-hui Yang
In this paper, we propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
3 code implementations • NeurIPS 2020 • Dongxian Wu, Shu-Tao Xia, Yisen Wang
The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.
no code implementations • 9 Apr 2020 • Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the predictions of the infected model will be maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.
no code implementations • 26 Mar 2020 • Xianbin Lv, Dongxian Wu, Shu-Tao Xia
Probabilistic modeling, which consists of a classifier and a transition matrix, depicts the transformation from true labels to noisy labels and is a promising approach.
1 code implementation • 16 Mar 2020 • Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia
In this work, we propose a novel defense method, the robust training (RT), by jointly minimizing two separate risks ($R_{stand}$ and $R_{rob}$), which are defined with respect to the benign example and its neighborhood, respectively.
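An illustrative sketch of jointly minimizing a standard risk and a neighborhood risk, with the neighborhood term approximated here by a random perturbation inside an L-infinity ball (the perturbation scheme, radius, and weighting are assumptions, not the paper's formulation).

```python
# Joint standard + neighborhood (robust) training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def robust_training_loss(model, x, y, epsilon=8 / 255, lam=1.0):
    loss_stand = F.cross_entropy(model(x), y)            # risk on the benign examples
    delta = (torch.rand_like(x) * 2 - 1) * epsilon       # random point in the neighborhood
    loss_rob = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    return loss_stand + lam * loss_rob

# Tiny usage example with a linear classifier on 3x32x32 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
loss = robust_training_loss(model, x, y)
```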
no code implementations • 26 Feb 2020 • Yan Feng, Bin Chen, Tao Dai, Shu-Tao Xia
Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks due to its efficiency of encoding high-dimensional visual features especially when dealing with large-scale datasets.
1 code implementation • 23 Feb 2020 • Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu
However, the strong defence ability and high learning accuracy of these schemes cannot be ensured at the same time, which will impede the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and strong privacy guarantee).
2 code implementations • ICLR 2020 • Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
We find that using more gradients from the skip connections rather than the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability.
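A toy sketch of attenuating the residual-branch gradient by a decay factor so that relatively more gradient flows back through the skip connection, in the spirit of the idea above (the block architecture and gamma value are assumptions).

```python
# Residual block whose residual-branch gradient is scaled by gamma in backward.
import torch
import torch.nn as nn

class GradScale(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by `gamma` in backward."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.gamma, None

class DecayedResidualBlock(nn.Module):
    def __init__(self, dim, gamma=0.5):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gamma = gamma
    def forward(self, x):
        # Skip connection keeps full gradient; residual branch is decayed by gamma.
        return x + GradScale.apply(self.branch(x), self.gamma)

block = DecayedResidualBlock(16)
out = block(torch.randn(4, 16, requires_grad=True))
out.sum().backward()
```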
no code implementations • 26 Dec 2019 • Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Zhikang Xia, Xiang Liu, Yong Jiang
This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators based on prior knowledge.
no code implementations • 8 Dec 2019 • Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Yong Jiang
According to the neural network universal approximation theorem, pre-training enables a more effective and explainable evolution process.
1 code implementation • CVPR 2020 • Bowen Zhao, Xi Xiao, Guojun Gan, Bin Zhang, Shu-Tao Xia
In this paper, we demonstrate it can indeed help the model to output more discriminative results within old classes.
Ranked #2 on Incremental Learning on ImageNet100 - 10 steps (# M Params metric)
1 code implementation • 5 Nov 2019 • Yiming Li, Peidong Liu, Yong Jiang, Shu-Tao Xia
To a large extent, the privacy of visual classification data is mainly in the mapping between the image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios.
no code implementations • 5 Nov 2019 • Peidong Liu, Xiyu Yan, Yong Jiang, Shu-Tao Xia
Deep learning-based visual tracking algorithms such as MDNet achieve high performance by leveraging the feature extraction ability of a deep neural network.
no code implementations • 27 Oct 2019 • Jia Xu, Yiming Li, Yong Jiang, Shu-Tao Xia
In this paper, we define the local flatness of the loss surface as the maximum value of the chosen norm of the gradient with respect to the input within a neighborhood centered on the benign sample, and discuss the relationship between local flatness and adversarial vulnerability.
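A rough Monte-Carlo sketch of estimating that quantity (not the paper's exact procedure): sample points in an L-infinity ball around the benign input and take the largest input-gradient norm; the choice of norm, radius, and sample count are assumptions.

```python
# Monte-Carlo estimate of the maximum input-gradient norm in a neighborhood.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_flatness(model, x, y, epsilon=8 / 255, n_samples=16):
    worst = 0.0
    for _ in range(n_samples):
        delta = (torch.rand_like(x) * 2 - 1) * epsilon
        x_pert = (x + delta).clamp(0, 1).requires_grad_(True)
        loss = F.cross_entropy(model(x_pert), y)
        grad, = torch.autograd.grad(loss, x_pert)
        worst = max(worst, grad.norm(p=1).item())   # the norm choice is an assumption
    return worst

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(1, 3, 32, 32), torch.tensor([3])
print(local_flatness(model, x, y))
```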
no code implementations • 25 Sep 2019 • Haoyu Liang, Zhihao Ouyang, Hang Su, Yuyuan Zeng, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Convolutional neural networks (CNNs) have often been treated as "black boxes" and successfully used in a range of tasks.
1 code implementation • 17 Sep 2019 • Hongshan Li, Yu Guo, Zhi Wang, Shu-Tao Xia, Wenwu Zhu
Then we train the agent via reinforcement learning to adapt it to different deep learning cloud services, which act as the interactive training environment and feed back a reward that jointly considers accuracy and data size.
no code implementations • 15 Aug 2019 • Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Shu-Tao Xia
In addition, label regularization techniques such as label smoothing and label disturbance have also been proposed with the motivation of adding a stochastic perturbation to labels.
1 code implementation • 17 Jul 2019 • Yiming Li, Yang Zhang, Qingtao Tang, Weipeng Huang, Yong Jiang, Shu-Tao Xia
The $k$-means algorithm is one of the most classical clustering methods and has been widely and successfully used in signal processing.
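For reference, a minimal sketch of plain Lloyd's $k$-means iteration (the classical baseline this line refers to, not the paper's proposed variant).

```python
# Plain Lloyd's algorithm: alternate nearest-centroid assignment and centroid update.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

labels, centers = kmeans(np.random.rand(200, 2), k=3)
```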
no code implementations • 13 Apr 2019 • Bowen Zhao, Xi Xiao, Wanpeng Zhang, Bin Zhang, Shu-Tao Xia
There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA).
no code implementations • 14 Mar 2019 • Jiawang Bai, Yiming Li, Jiawei Li, Yong Jiang, Shu-Tao Xia
How to obtain a model with good interpretability and performance has always been an important research topic.
no code implementations • 10 Mar 2019 • Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Chun Li, Shu-Tao Xia
Despite the impressive performance of random forests (RF), their theoretical properties have not been thoroughly understood.
no code implementations • NeurIPS 2018 • Songtao Wang, Dan Li, Yang Cheng, Jinkun Geng, Yanshu Wang, Shuai Wang, Shu-Tao Xia, Jianping Wu
In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training.
no code implementations • WS 2018 • Jilei Wang, Shiying Luo, Weiyan Shi, Tao Dai, Shu-Tao Xia
Learning vector space representations of words (i.e., word embeddings) has recently attracted wide research interest and has been extended to the cross-lingual scenario.
2 code implementations • ICML 2018 • Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah M. Erfani, Shu-Tao Xia, Sudanthi Wijewickrema, James Bailey
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).
Ranked #39 on Image Classification on mini WebVision 1.0
1 code implementation • CVPR 2018 • Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia
We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial to make accurate predictions.
no code implementations • 21 Apr 2016 • Chaobing Song, Shu-Tao Xia
In this paper, we propose a new discriminative model named the nonextensive information theoretical machine (NITM), based on a nonextensive generalization of Shannon information theory.
no code implementations • 15 Apr 2016 • Chaobing Song, Shu-Tao Xia
In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS), which can be inferred exactly.
no code implementations • 25 Nov 2015 • Yisen Wang, Chaobing Song, Shu-Tao Xia
In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed to unify Shannon entropy, Gain Ratio and Gini index, which generalizes the split criteria of decision trees.
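A short sketch of the Tsallis entropy family underlying such a criterion (the unifying formula only, not the paper's full split criterion): $S_q(p) = (1 - \sum_i p_i^q)/(q-1)$, which recovers Shannon entropy as $q \to 1$ and the Gini index at $q = 2$.

```python
# Tsallis entropy: q -> 1 gives Shannon entropy (nats), q = 2 gives the Gini index.
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-9:
        return -np.sum(p * np.log(p))          # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.0))   # Shannon entropy
print(tsallis_entropy(p, 2.0))   # Gini index: 1 - sum(p_i^2) = 0.62
```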