no code implementations • Findings (NAACL) 2022 • Yaqing Wang, Xin Tian, Haoyi Xiong, Yueyang Li, Zeyu Chen, Sheng Guo, Dejing Dou
In this work, we show that Relation Graph augmented Learning (RGL) can improve the performance of few-shot natural language understanding tasks.
1 code implementation • ICML 2020 • Hai Phan, My T. Thai, Han Hu, Ruoming Jin, Tong Sun, Dejing Dou
In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples.
no code implementations • Findings (EMNLP) 2021 • Qiuhao Lu, Dejing Dou, Thien Huu Nguyen
These knowledge adapters are pre-trained for individual domain knowledge sources and integrated via an attention-based knowledge controller to enrich PLMs.
no code implementations • EMNLP 2021 • Zeru Zhang, Zijie Zhang, Yang Zhou, Lingfei Wu, Sixing Wu, Xiaoying Han, Dejing Dou, Tianshi Che, Da Yan
Recent literatures have shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks.
no code implementations • 24 Feb 2023 • Yuxuan Zhang, Qingzhong Wang, Jiang Bian, Yi Liu, Yanwu Xu, Dejing Dou, Haoyi Xiong
Due to the high similarity between MRI data and videos, we conduct extensive empirical studies on video recognition techniques for MRI classification to answer three questions: (1) Can we directly use video recognition models for MRI classification? (2) Which model is more appropriate for MRI? (3) Are common tricks from video recognition, such as data augmentation, still useful for MRI classification?
1 code implementation • 8 Jan 2023 • Yan Li, Xinjiang Lu, Yaqing Wang, Dejing Dou
In this work, we propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder (BVAE) equipped with diffusion, denoise, and disentanglement, namely D3VAE.
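For intuition, here is a minimal sketch of the forward diffusion step that a diffusion-equipped generative forecaster like D3VAE builds on; the variance schedule and all names below are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch (assumed, not D3VAE's code): corrupt the target window with
# Gaussian noise according to a variance schedule; a decoder is then trained
# to denoise and disentangle the recovered series.
import numpy as np

T = 100                                 # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # linear variance schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)    # \bar{alpha}_t = prod_s (1 - beta_s)

def diffuse(x0: np.ndarray, t: int, rng=np.random.default_rng(0)):
    """Sample x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

target_window = np.sin(np.linspace(0.0, 6.28, 48))  # toy target series
x_noisy = diffuse(target_window, t=50)              # corrupted supervision signal
```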
1 code implementation • 5 Jan 2023 • Miao Chen, Xinjiang Lu, Tong Xu, Yanyan Li, Jingbo Zhou, Dejing Dou, Hui Xiong
Although remarkable progress has been made on neural table-to-text methods, generalization issues caused by limited source tables hinder the applicability of these models.
1 code implementation • 5 Jan 2023 • Yan Li, Xinjiang Lu, Haoyi Xiong, Jian Tang, Jiantao Su, Bo Jin, Dejing Dou
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
no code implementations • 20 Dec 2022 • Siyu Huang, Tianyang Wang, Haoyi Xiong, Bihan Wen, Jun Huan, Dejing Dou
Inspired by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur a high loss.
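As a rough illustration (not the authors' implementation), loss-driven querying can be sketched with an auxiliary loss-prediction head; the predictor and feature dimension below are hypothetical.

```python
# Minimal sketch of loss-based querying: an auxiliary head predicts the task
# loss of each unlabeled sample, and the oracle labels the top-loss samples.
import torch

def query_by_predicted_loss(loss_predictor, unlabeled_x, k=10):
    """Return indices of the k unlabeled samples with highest predicted loss."""
    with torch.no_grad():
        pred_loss = loss_predictor(unlabeled_x).squeeze(-1)  # shape (N,)
    return torch.topk(pred_loss, k).indices

# Hypothetical usage: loss_predictor is any regression head trained to mimic
# the task loss on labeled data.
loss_predictor = torch.nn.Sequential(torch.nn.Linear(32, 1))
picked = query_by_predicted_loss(loss_predictor, torch.randn(100, 32), k=10)
```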
1 code implementation • 19 Dec 2022 • Qingrui Jia, Xuhong LI, Lei Yu, Jiang Bian, Penghao Zhao, Shupeng Li, Haoyi Xiong, Dejing Dou
Mislabeled or ambiguously labeled samples in the training set can negatively affect the performance of deep models, so diagnosing the dataset and identifying mislabeled samples helps to improve generalization.
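A hedged sketch of one generic training-dynamics heuristic for such diagnosis (margin tracking in the spirit of the area-under-the-margin idea, not necessarily this paper's method):

```python
# Generic sketch: track, over epochs, the gap between the logit of the
# assigned label and the largest other logit; samples with persistently low
# margins are candidates for being mislabeled.
import torch

def label_margin(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-sample margin: logit of the given label minus the best other logit."""
    assigned = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))
    return assigned - masked.max(dim=1).values

# During training, accumulate label_margin(...) per sample each epoch, then
# sort ascending: the smallest average margins are the suspects.
```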
no code implementations • 26 Nov 2022 • Congxi Xiao, Jingbo Zhou, Jizhou Huang, HengShu Zhu, Tong Xu, Dejing Dou, Hui Xiong
The core idea of such a framework is first to pre-train a basis (or master) model over the URG, and then to adaptively derive specific (or slave) models from the basis model for different regions.
no code implementations • 24 Nov 2022 • Ji Liu, Juncheng Jia, Beichen Ma, Chendi Zhou, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
The system model enables parallel training of multiple jobs, with a cost model based on data fairness and the training time of diverse devices during parallel training.
no code implementations • 13 Nov 2022 • Qiuhao Lu, Dejing Dou, Thien Huu Nguyen
Deep learning models have demonstrated superior performance in various healthcare applications.
1 code implementation • 8 Aug 2022 • Jingbo Zhou, Xinjiang Lu, Yixiong Xiao, Jiantao Su, Junfu Lyu, Yanjun Ma, Dejing Dou
Thus, Wind Power Forecasting (WPF) has been widely recognized as one of the most critical issues in wind power integration and operation.
no code implementations • 26 Jul 2022 • Jiang Bian, Qingzhong Wang, Haoyi Xiong, Jun Huang, Chen Liu, Xuhong LI, Jun Cheng, Jun Zhao, Feixiang Lu, Dejing Dou
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.
1 code implementation • 14 Jul 2022 • Ji Liu, daxiang dong, Xi Wang, An Qin, Xingjian Li, Patrick Valduriez, Dejing Dou, dianhai yu
Although more layers and more parameters generally improve the accuracy of models, such big models generally have high computational complexity and require large memory, which exceeds the capacity of small devices for inference and incurs long training times.
no code implementations • 14 Jul 2022 • Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou
The federated learning (FL) framework enables edge clients to collaboratively learn a shared inference model while keeping the training data private on each client.
no code implementations • 4 Jul 2022 • Xueying Zhan, Zeyu Dai, Qingzhong Wang, Qing Li, Haoyi Xiong, Dejing Dou, Antoni B. Chan
In this paper, we propose a sampling scheme, Monte-Carlo Pareto Optimization for Active Learning (POAL), which selects optimal subsets of unlabeled samples with fixed batch size from the unlabeled data pool.
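The Pareto-optimal part can be illustrated with a tiny non-dominated-set routine over two acquisition scores; the actual objectives and the Monte-Carlo search in POAL differ.

```python
# Minimal sketch of Pareto-optimal selection, assuming two per-sample
# acquisition scores (e.g., uncertainty and diversity) to maximize jointly.
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows of an (N, 2) score matrix (maximization)."""
    idx = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores >= s, axis=1) &
                           np.any(scores > s, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

scores = np.random.rand(200, 2)           # [uncertainty, diversity] per sample
batch_candidates = pareto_front(scores)   # query from the non-dominated set
```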
1 code implementation • 4 Jul 2022 • Xuhong LI, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou
Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner.
no code implementations • 21 Jun 2022 • Guanghao Li, Yue Hu, Miao Zhang, Ji Liu, Quanjun Yin, Yong Peng, Dejing Dou
As training efficiency in the ring topology favors devices with homogeneous resources, classifying devices by computing capacity mitigates the impact of straggler effects.
no code implementations • 12 Jun 2022 • Hang Hua, Xingjian Li, Dejing Dou, Cheng-Zhong Xu, Jiebo Luo
The advent of large-scale pre-trained language models has contributed greatly to the recent progress in natural language processing.
1 code implementation • 2 Jun 2022 • Fei Wu, Qingzhong Wang, Jian Bian, Haoyi Xiong, Ning Ding, Feixiang Lu, Jun Cheng, Dejing Dou
Finally, we discuss the challenges and unsolved problems in this area, and, to facilitate sports analytics, we develop a toolbox using PaddlePaddle that supports football, basketball, table tennis, and figure skating action recognition.
no code implementations • 26 May 2022 • Xingjian Li, Pengkun Yang, Tianyang Wang, Xueying Zhan, Min Xu, Dejing Dou, Chengzhong Xu
Uncertainty estimation for unlabeled data is crucial to active learning.
no code implementations • 26 May 2022 • Xiao Zhang, Dejing Dou, Ji Wu
To study the feature forgetting problem, we create a synthetic dataset to identify and visualize the prevalence of feature forgetting in neural networks.
no code implementations • 20 May 2022 • Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin
To handle the diverse query requests from users at web scale, Baidu has made tremendous efforts to understand users' queries, retrieve relevant content from a pool of trillions of webpages, and rank the most relevant webpages at the top of the results.
no code implementations • 25 Apr 2022 • Hong Zhang, Ji Liu, Juncheng Jia, Yang Zhou, Huaiyu Dai, Dejing Dou
Despite achieving remarkable performance, Federated Learning (FL) suffers from two critical challenges, i.e., limited computational resources and low training efficiency.
no code implementations • 6 Apr 2022 • Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, Dejing Dou
Furthermore, we propose to leverage the available protein language model pretrained on protein sequences to enhance the self-supervised learning.
1 code implementation • 25 Mar 2022 • Xueying Zhan, Qingzhong Wang, Kuan-Hao Huang, Haoyi Xiong, Dejing Dou, Antoni B. Chan
In this work, we construct a DAL toolkit, DeepAL+, by re-implementing 19 highly cited DAL methods.
no code implementations • 9 Mar 2022 • Andong Deng, Xingjian Li, Zhibing Li, Di Hu, Chengzhong Xu, Dejing Dou
Based on the contradictory phenomenon between FE and FT, namely that a better feature extractor does not necessarily fine-tune to better accuracy, we conduct comprehensive analyses of the features before the softmax layer to provide insightful explanations.
no code implementations • 2 Mar 2022 • Yi Gu, Hongzhi Cheng, Kafeng Wang, Dejing Dou, Chengzhong Xu, Hui Kong
In this paper, we propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
no code implementations • 19 Dec 2021 • Qiuhao Lu, Thien Huu Nguyen, Dejing Dou
Unplanned intensive care unit (ICU) readmission rate is an important metric for evaluating the quality of hospital care.
no code implementations • 11 Dec 2021 • Chendi Zhou, Ji Liu, Juncheng Jia, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
However, the scheduling of devices for multiple jobs with FL remains a critical and open problem.
1 code implementation • NeurIPS 2021 • Can Chen, Shuhao Zheng, Xi Chen, Erqun Dong, Xue (Steve) Liu, Hao liu, Dejing Dou
To be specific, GDW unrolls the loss gradient to class-level gradients by the chain rule and reweights the flow of each gradient separately.
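A minimal sketch (assumed, not the released code) of this class-level reweighting for softmax cross-entropy, where the logit gradient decomposes per class as (p_c - y_c); the weights below are fixed for illustration rather than learned as in GDW.

```python
# Sketch: scale each class's gradient flow separately before backpropagating
# into the network; the chain rule carries the reweighting downstream.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 5)
x, y = torch.randn(8, 16), torch.randint(0, 5, (8,))
class_w = torch.ones(5); class_w[3] = 0.2          # illustrative class weights

logits = model(x)
probs = F.softmax(logits, dim=1)
onehot = F.one_hot(y, num_classes=5).float()
grad_logits = (probs - onehot) * class_w / len(x)  # reweighted per-class flow
logits.backward(gradient=grad_logits)              # populates model gradients
```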
no code implementations • NeurIPS 2021 • Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou
Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks.
1 code implementation • 20 Nov 2021 • Ji Liu, Zhihua Wu, dianhai yu, Yanjun Ma, Danlei Feng, Minxu Zhang, Xinxuan Wu, Xuefeng Yao, Dejing Dou
The training process generally exploits distributed computing resources to reduce training time.
1 code implementation • EMNLP 2021 • Yaqing Wang, Song Wang, Quanming Yao, Dejing Dou
Short text classification is a fundamental task in natural language processing.
1 code implementation • 29 Oct 2021 • Can Chen, Shuhao Zheng, Xi Chen, Erqun Dong, Xue Liu, Hao liu, Dejing Dou
To be specific, GDW unrolls the loss gradient to class-level gradients by the chain rule and reweights the flow of each gradient separately.
no code implementations • 24 Oct 2021 • Kafeng Wang, Haoyi Xiong, Jie Zhang, Hongyang Chen, Dejing Dou, Cheng-Zhong Xu
Extensive experiments based on a real-world field deployment (on highways in Shenzhen, China) show that SenseMag significantly outperforms existing methods in both classification accuracy and the granularity of vehicle types (i.e., 7 types recognized by SenseMag versus 4 types in existing work).
no code implementations • 7 Oct 2021 • Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou
Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.
no code implementations • 6 Oct 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.
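A small sketch of comparing principal subspaces of deep features, using the top principal direction as a stand-in for the paper's $\mathcal{P}$-vector (feature shapes and data are toy assumptions):

```python
# Sketch: take the first principal direction of each model's feature matrix
# and measure the angle between them; small angles mean similar subspaces.
import numpy as np

def principal_direction(features: np.ndarray) -> np.ndarray:
    """First right singular vector of the centered (N, D) feature matrix."""
    centered = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

f_a, f_b = np.random.randn(500, 64), np.random.randn(500, 64)
v_a, v_b = principal_direction(f_a), principal_direction(f_b)
angle = np.degrees(np.arccos(np.clip(abs(v_a @ v_b), 0.0, 1.0)))
print(f"principal angle: {angle:.1f} degrees")
```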
1 code implementation • 24 Sep 2021 • Shuangli Li, Jingbo Zhou, Tong Xu, Dejing Dou, Hui Xiong
Though graph contrastive learning (GCL) methods have achieved extraordinary performance with insufficient labeled data, most have focused on designing data augmentation schemes for general graphs.
no code implementations • 2 Sep 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
Existing interpretation algorithms have found that, even when deep models make the same correct predictions on the same image, they might rely on different sets of input features for classification.
1 code implementation • ICCV 2021 • Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, Dejing Dou
To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset.
1 code implementation • 21 Jul 2021 • Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Weili Huang, Dejing Dou, Hui Xiong
To this end, we propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool).
no code implementations • NeurIPS 2021 • Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, Dejing Dou
the target property, such that the limited labels can be effectively propagated among similar molecules.
no code implementations • 12 Jul 2021 • Weijia Zhang, Hao liu, Lijun Zha, HengShu Zhu, Ji Liu, Dejing Dou, Hui Xiong
Real estate appraisal refers to the process of developing an unbiased opinion of a property's market value, which plays a vital role in decision-making for various players in the marketplace (e.g., real estate agents, appraisers, lenders, and buyers).
no code implementations • NAACL 2021 • Hang Hua, Xingjian Li, Dejing Dou, Cheng-Zhong Xu, Jiebo Luo
The brittleness of this process is often reflected by the sensitivity to random seeds.
no code implementations • 2 Jul 2021 • Zhiyuan Wang, Haoyi Xiong, Jie Zhang, Sijia Yang, Mehdi Boukhechba, Laura E. Barnes, Daqing Zhang, Dejing Dou
Mobile sensing apps have been widely used as a practical approach to collect behavioral and health-related information from individuals and provide timely interventions to promote health and well-being, such as mental health and chronic care.
no code implementations • 25 Jun 2021 • Haiyan Jiang, Shuyu Li, Luwei Zhang, Haoyi Xiong, Dejing Dou
Compared with existing algorithms, the proposed GRMF can automatically learn the grouping structure and sparsity in MF without prior knowledge, by introducing a naturally adjustable non-convex regularization to achieve simultaneous sparsity and grouping effect.
no code implementations • 20 Jun 2021 • Xuanyu Wu, Xuhong LI, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou
Incorporating a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the generalization performance of deep models, in complement to a testing set.
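A hedged sketch of the classification-error half of this idea (the Fisher-ratio part is omitted), assuming torchvision-style transforms; the specific augmentations are illustrative, not the paper's chosen set.

```python
# Sketch: estimate a model's generalization by its error rate on randomized
# "contrastive" views of the training data.
import torch
import torchvision.transforms as T

augment = T.Compose([T.RandomResizedCrop(32), T.RandomHorizontalFlip(),
                     T.ColorJitter(0.4, 0.4, 0.4)])

@torch.no_grad()
def contrastive_error(model, images, labels, n_views=5):
    """Mean error of `model` over randomized transformations of the data."""
    errors = []
    for _ in range(n_views):
        views = torch.stack([augment(img) for img in images])
        errors.append((model(views).argmax(1) != labels).float().mean())
    return torch.stack(errors).mean()
```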
1 code implementation • 3 Jun 2021 • Hao liu, Qian Gao, Jiang Li, Xiaochao Liao, Hao Xiong, Guangxing Chen, Wenlin Wang, Guobao Yang, Zhiwei Zha, daxiang dong, Dejing Dou, Haoyi Xiong
In this work, we present JIZHI, a Model-as-a-Service system that handles hundreds of millions of online inference requests per second to huge deep models with trillions of sparse parameters, for over twenty real-time recommendation services at Baidu, Inc.
no code implementations • 29 Apr 2021 • Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou
Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.
1 code implementation • 16 Apr 2021 • Gong Zhang, Yang Zhou, Sixing Wu, Zeru Zhang, Dejing Dou
With the guidance of known aligned entities in the context of multiple random walks, an adversarial knowledge translation model is developed to fill and translate masked entities in pairwise random walks from two KGs.
no code implementations • EACL 2021 • Nisansa de Silva, Dejing Dou
Social networks face a major challenge in the form of rumors and fake news, due to their intrinsic nature of connecting users to millions of others, and of giving any individual the power to post anything.
1 code implementation • CVPR 2021 • Jie An, Siyu Huang, Yibing Song, Dejing Dou, Wei Liu, Jiebo Luo
The forward inference projects input images into deep features, while the backward inference remaps deep features back to input images in a lossless and unbiased way.
no code implementations • 25 Mar 2021 • Xingjian Li, Haoyi Xiong, Chengzhong Xu, Dejing Dou
Performing mixup for transfer learning with pre-trained models, however, is not that simple: a high-capacity pre-trained model with a large fully-connected (FC) layer can easily overfit the target dataset even with samples-to-labels mixed up.
1 code implementation • 19 Mar 2021 • Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.
1 code implementation • CVPR 2021 • Abulikemu Abuduweili, Xingjian Li, Humphrey Shi, Cheng-Zhong Xu, Dejing Dou
To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization that consists of two complementary components: Adaptive Knowledge Consistency (AKC) on the examples between the source and target model, and Adaptive Representation Consistency (ARC) on the target model between labeled and unlabeled examples.
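A simplified sketch of the two consistency terms, with fixed weights standing in for the adaptive gating (the function names and weighting are my simplification of AKC/ARC, not the released code):

```python
# Sketch: keep the target model consistent with the frozen source model's
# representations, and keep its predictions consistent across examples/views.
import torch
import torch.nn.functional as F

def knowledge_consistency(src_feat, tgt_feat, weight=1.0):
    """AKC-style term: penalize drift from the pre-trained representation."""
    return weight * F.mse_loss(tgt_feat, src_feat.detach())

def prediction_consistency(logits_a, logits_b, weight=1.0):
    """Generic consistency term between two sets of target-model outputs."""
    return weight * F.kl_div(F.log_softmax(logits_b, dim=1),
                             F.softmax(logits_a.detach(), dim=1),
                             reduction="batchmean")
```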
1 code implementation • 15 Feb 2021 • Weijia Zhang, Hao liu, Fan Wang, Tong Xu, Haoran Xin, Dejing Dou, Hui Xiong
Electric Vehicle (EV) has become a preferable choice in the modern transportation system due to its environmental and energy sustainability.
1 code implementation • 29 Jan 2021 • Haoran Xin, Xinjiang Lu, Tong Xu, Hao liu, Jingjing Gu, Dejing Dou, Hui Xiong
Second, a user-specific travel intention is formulated as an aggregation combining home-town preference and generic travel intention, where the generic travel intention is regarded as a mixture of inherent intentions that can be learned by a Neural Topic Model (NTM).
no code implementations • 1 Jan 2021 • Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu
The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.
no code implementations • 1 Jan 2021 • Xuhong LI, Haoyi Xiong, Siyu Huang, Shilei Ji, Yanjie Fu, Dejing Dou
Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the "ground truth" of interpretations through voting.
no code implementations • 1 Jan 2021 • Xiao Zhang, Di Hu, Xingjian Li, Dejing Dou, Ji Wu
We demonstrate using model information as a general analysis tool to gain insight into problems that arise in deep learning.
no code implementations • 1 Jan 2021 • Xiao Zhang, Dejing Dou, Ji Wu
We provide a practical distance measure in the space of functions parameterized by neural networks.
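One natural instantiation (my construction for illustration, not necessarily the paper's measure) estimates the distance between two networks as the expected discrepancy of their outputs over a reference input sample:

```python
# Sketch: Monte-Carlo estimate of a function-space distance E_x ||f(x) - g(x)||.
import torch

@torch.no_grad()
def function_distance(f, g, inputs: torch.Tensor) -> float:
    """Mean L2 gap between the outputs of two networks on the same inputs."""
    return (f(inputs) - g(inputs)).norm(dim=1).mean().item()

f = torch.nn.Linear(10, 3)
g = torch.nn.Linear(10, 3)
print(function_distance(f, g, torch.randn(1024, 10)))
```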
no code implementations • 1 Jan 2021 • Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu
Random label noises (or observational noises) widely exist in practical machine learning settings.
no code implementations • 1 Jan 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
While deep learning is effective at learning features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well studied or compared.
no code implementations • 30 Dec 2020 • Jindong Han, Hao liu, HengShu Zhu, Hui Xiong, Dejing Dou
Specifically, we first propose a heterogeneous recurrent graph neural network to model the spatiotemporal autocorrelation among air quality and weather monitoring stations.
no code implementations • 22 Dec 2020 • Congxi Xiao, Jingbo Zhou, Jizhou Huang, An Zhuo, Ji Liu, Haoyi Xiong, Dejing Dou
Furthermore, to transfer the firsthand knowledge (gained in epicenters) to the target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features for precise early detection of high-risk neighborhoods in the target city, even before any confirmed cases are known.
1 code implementation • 17 Dec 2020 • Jingbo Zhou, Shuangli Li, Liang Huang, Haoyi Xiong, Fan Wang, Tong Xu, Hui Xiong, Dejing Dou
The hierarchical attentive aggregation can capture spatial dependencies among atoms, as well as fuse the position-enhanced information with the capability of discriminating multiple spatial relations among atoms.
1 code implementation • 14 Dec 2020 • Dong Wang, Di Hu, Xingjian Li, Dejing Dou
The main reason is that the large number of nodes (i.e., video frames) makes it hard for GCNs to capture and model temporal relations in videos.
Ranked #17 on Action Segmentation on Breakfast
no code implementations • NeurIPS 2020 • Zijie Zhang, Zeru Zhang, Yang Zhou, Yelong Shen, Ruoming Jin, Dejing Dou
Despite achieving remarkable performance, deep graph learning models, such as node classification and network embedding, suffer from harassment caused by small adversarial perturbations.
no code implementations • COLING 2020 • Qiuhao Lu, Nisansa de Silva, Dejing Dou, Thien Huu Nguyen, Prithviraj Sen, Berthold Reinwald, Yunyao Li
Network representation learning (NRL) is crucial in the area of graph learning.
no code implementations • EMNLP 2020 • Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Dejing Dou, Thien Huu Nguyen
In this work, we propose to incorporate the syntactic structures of the sentences into the deep learning models for TOWE, leveraging the syntax-based opinion possibility scores and the syntactic connections between the words.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Quan Hung Tran, Dejing Dou, Thien Huu Nguyen
In addition, we propose a mechanism to obtain the importance scores for each word in the sentences based on the dependency trees that are then injected into the model to improve the representation vectors for ABSA.
no code implementations • 16 Oct 2020 • Xingjian Li, Di Hu, Xuhong LI, Haoyi Xiong, Zhi Ye, Zhipeng Wang, Chengzhong Xu, Dejing Dou
Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms when given a limited quantity of training samples.
1 code implementation • NeurIPS 2020 • Di Hu, Rui Qian, Minyue Jiang, Xiao Tan, Shilei Wen, Errui Ding, Weiyao Lin, Dejing Dou
First, we propose to learn robust object representations by aggregating the candidate sound localization results in the single source scenes.
no code implementations • 16 Sep 2020 • Xiao Zhang, Xingjian Li, Dejing Dou, Ji Wu
We propose a practical measure of the generalizable information in a neural network model based on prequential coding, which we term Information Transfer ($L_{IT}$).
no code implementations • 20 Jul 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou
While the existing multitask learning algorithms need to run backpropagation over both the source and target datasets and usually consume a higher gradient complexity, XMixup transfers the knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects the auxiliary samples from the source dataset and augments training samples via the simple mixup strategy.
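A minimal sketch of the mixup step, assuming the per-class auxiliary source samples have already been selected; the pairing scheme and mixing ratio below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: for each target sample, blend it with a source sample chosen for
# its class; in practice the labels are mixed with the same lam weighting.
import torch

def xmixup_batch(x_tgt, y_tgt, x_src_for_class, lam=0.7):
    """Mix each target sample with a source sample selected for its class."""
    x_src = torch.stack([x_src_for_class[int(c)] for c in y_tgt])
    return lam * x_tgt + (1.0 - lam) * x_src

# Hypothetical usage: x_src_for_class maps a target class id to one
# pre-selected auxiliary source sample of matching shape.
x_src_for_class = {c: torch.randn(3, 32, 32) for c in range(10)}
mixed = xmixup_batch(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)),
                     x_src_for_class)
```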
1 code implementation • 17 Jul 2020 • Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou
Generation of high-quality person images is challenging, due to the sophisticated entanglements among image factors, e. g., appearance, pose, foreground, background, local details, global structures, etc.
no code implementations • 13 Jul 2020 • Xuhong Li, Yves GRANDVALET, Rémi Flamary, Nicolas Courty, Dejing Dou
We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.
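A small self-contained sketch of measuring such a match with entropic optimal transport (a plain Sinkhorn loop in NumPy; the paper's exact cost and invariances are not reproduced here):

```python
# Sketch: entropic OT cost between two feature sets, via Sinkhorn iterations.
import numpy as np

def sinkhorn_distance(X, Y, eps=1.0, n_iter=200):
    """Entropic OT cost between (N, D) and (M, D) point clouds."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-C / eps)                                 # Gibbs kernel
    a, b = np.full(len(X), 1 / len(X)), np.full(len(Y), 1 / len(Y))
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                      # transport plan
    return (P * C).sum()

print(sinkhorn_distance(np.random.randn(64, 8), np.random.randn(64, 8)))
```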
1 code implementation • ICML 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning, while the effects of randomization can be easily converged throughout the overall learning procedure.
no code implementations • ACL 2020 • Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, Thien Huu Nguyen
In order to overcome these issues, we propose a novel deep learning model for RE that uses the dependency trees to extract the syntax-based importance scores for the words, serving as a tree representation to introduce syntactic information into the models with greater generalization.
1 code implementation • ECCV 2020 • Di Hu, Xuhong LI, Lichao Mou, Pu Jin, Dong Chen, Liping Jing, Xiaoxiang Zhu, Dejing Dou
With the help of this dataset, we evaluate three proposed approaches for transferring sound event knowledge to the aerial scene recognition task in a multimodal learning framework, and show the benefit of exploiting audio information for aerial scene recognition.
1 code implementation • 14 May 2020 • Di Hu, Lichao Mou, Qingzhong Wang, Junyu Gao, Yuansheng Hua, Dejing Dou, Xiao Xiang Zhu
Visual crowd counting has been recently studied as a way to enable people counting in crowd scenes from images.
no code implementations • 6 May 2020 • Jizhou Huang, Haifeng Wang, Haoyi Xiong, Miao Fan, An Zhuo, Ying Li, Dejing Dou
While these strategies have effectively dealt with the critical situations of outbreaks, the combination of the pandemic and mobility controls has slowed China's economic growth, resulting in the first quarterly decline of Gross Domestic Product (GDP) since GDP records began in 1992.
no code implementations • ICLR 2020 • Kafeng Wang, Xitong Gao, Yiren Zhao, Xingjian Li, Dejing Dou, Cheng-Zhong Xu
Deep convolutional neural networks are now widely deployed in vision applications, but a limited size of training data can restrict their task performance.
no code implementations • 26 Apr 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Dejing Dou, Chengzhong Xu
Softening labels of training datasets with respect to data representations has been frequently used to improve the training of deep neural networks (DNNs).
2 code implementations • 1 Apr 2020 • Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou
In this paper, we introduce a novel interpreting framework that learns an interpretable model based on an ontology-based sampling technique to explain agnostic prediction models.
1 code implementation • 17 Mar 2020 • Siyu Huang, Haoyi Xiong, Tianyang Wang, Bihan Wen, Qingzhong Wang, Zeyu Chen, Jun Huan, Dejing Dou
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer, which includes a regularization term for matching the semantics between input contents and stylized outputs.
no code implementations • 25 Feb 2020 • Xien Liu, Song Wang, Xiao Zhang, Xinxin You, Ji Wu, Dejing Dou
In this study, we propose a label-guided learning framework, LguidedLearn, for text representation and classification.
no code implementations • 26 Jan 2020 • Di Hu, Zheng Wang, Haoyi Xiong, Dong Wang, Feiping Nie, Dejing Dou
Associating a sound with its producer in a complex audiovisual scene is a challenging task, especially when annotated training data are lacking.
no code implementations • 30 Dec 2019 • Xin Zhou, Dejing Dou, Boyang Li
Search space is a key consideration for neural architecture search.
1 code implementation • 7 Dec 2019 • Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li
We demonstrate that training the networks to have interpretable gradients improves their robustness to adversarial perturbations.
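One common way to impose such a constraint is a double-backprop-style penalty on the input gradient; this is a generic sketch of that family, not necessarily the paper's exact objective.

```python
# Sketch: add the squared norm of the input gradient of the task loss to the
# objective, which tends to smooth (and stabilize) gradient explanations.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 4)
x = torch.randn(16, 20, requires_grad=True)
y = torch.randint(0, 4, (16,))

task_loss = F.cross_entropy(model(x), y)
(input_grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
loss = task_loss + 0.1 * input_grad.pow(2).sum(dim=1).mean()
loss.backward()   # second backward pass trains through the gradient penalty
```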
1 code implementation • 5 Nov 2019 • Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, Thien Huu Nguyen
In this work, we propose a novel model for DE that simultaneously performs the two tasks in a single framework to benefit from their inter-dependencies.
no code implementations • 25 Sep 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
no code implementations • 25 Sep 2019 • Xiao Zhang, Song Wang, Dejing Dou, Xien Liu, Thien Huu Nguyen, Ji Wu
Contextual representation models like BERT have achieved state-of-the-art performance on a diverse range of NLP tasks.
no code implementations • ICLR 2020 • Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Li, Jun Huan
The problem of explaining deep learning models, and model predictions generally, has attracted intensive interest recently.
no code implementations • 23 Aug 2019 • Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei
Finally, we conduct extensive experiments using a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.
no code implementations • 16 Aug 2019 • Xiao Zhang, Dejing Dou, Ji Wu
External knowledge is often useful for natural language understanding tasks.
no code implementations • 7 Jul 2019 • Amir Pouran Ben Veyseh, Thien Huu Nguyen, Dejing Dou
Current deep learning models for relation extraction have mainly exploited this dependency information by guiding their computation along the structures of the dependency trees.
1 code implementation • ACL 2019 • Amir Pouran Ben Veyseh, Thien Huu Nguyen, Dejing Dou
In this work, we introduce a novel graph-based neural network for EFP that can integrate the semantic and syntactic information more effectively.
4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
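The key departure from the classical Gaussian mechanism is per-coordinate noise scales; here is a hedged sketch (the calibration of the scales to a privacy budget, which is the paper's contribution, is not reproduced).

```python
# Sketch: unlike i.i.d. Gaussian noise, a heterogeneous variant injects
# noise with a different standard deviation per coordinate.
import numpy as np

def heterogeneous_gaussian(v, sensitivity, sigma, scale,
                           rng=np.random.default_rng(0)):
    """Add per-coordinate noise: coordinate i gets std sigma*sensitivity*scale[i]."""
    scale = np.asarray(scale, dtype=float)
    return v + rng.normal(0.0, sigma * sensitivity * scale, size=v.shape)

v = np.ones(4)
print(heterogeneous_gaussian(v, sensitivity=1.0, sigma=2.0,
                             scale=[0.5, 1.0, 1.0, 2.0]))
```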
no code implementations • 23 Mar 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
no code implementations • 9 Mar 2019 • Pengwei Wang, Dejing Dou, Fangzhao Wu, Nisansa de Silva, Lianwen Jin
Then, to put both triples and mined logic rules within the same semantic space, all triples in the knowledge graph are represented in first-order logic.
no code implementations • ACL 2019 • Xiao Zhang, Ji Wu, Dejing Dou
Evaluation also confirms the tuned word embeddings have better semantic properties.
3 code implementations • COLING 2018 • Javid Ebrahimi, Daniel Lowd, Dejing Dou
Evaluating on adversarial examples has become a standard procedure to measure robustness of deep learning models.
2 code implementations • ACL 2018 • Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou
We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier.
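The first-order trick behind such flips can be sketched in a few lines: the loss change from swapping the token at position i to token b is approximated by the gradient difference grad[i, b] - grad[i, a_i], so the best flip is a single argmax (a minimal sketch, not the released HotFlip code).

```python
# Sketch: pick the (position, token) flip with the largest first-order
# estimate of the loss increase, given the gradient w.r.t. one-hot inputs.
import torch

def best_flip(grad_onehot: torch.Tensor, current: torch.Tensor):
    """grad_onehot: (L, V) loss gradient w.r.t. one-hot inputs;
    current: (L,) current token ids. Returns (position, new_token)."""
    gain = grad_onehot - grad_onehot.gather(1, current.unsqueeze(1))
    best_gain, best_tok = gain.max(dim=1)
    pos = int(best_gain.argmax())
    return pos, int(best_tok[pos])

grads = torch.randn(12, 50)                 # toy gradient, vocab of 50
toks = torch.randint(0, 50, (12,))          # toy current sequence
pos, tok = best_flip(grads, toks)
```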
2 code implementations • 18 Sep 2017 • NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou
In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks.
2 code implementations • 25 Jun 2017 • NhatHai Phan, Xintao Wu, Dejing Dou
However, only a few scientific studies on preserving privacy in deep learning have been conducted.
no code implementations • COLING 2016 • Javid Ebrahimi, Dejing Dou, Daniel Lowd
Classifying the stance expressed in online microblogging social media is an emerging problem in opinion mining.
no code implementations • 12 Jul 2015 • Shangpu Jiang, Daniel Lowd, Dejing Dou
In this paper, we focus on a novel knowledge reuse scenario where the knowledge in the source schema needs to be translated to a semantically heterogeneous target schema.
no code implementations • 11 Jul 2015 • Shangpu Jiang, Daniel Lowd, Dejing Dou
We use a probabilistic framework to integrate this new knowledge-based strategy with standard terminology-based and structure-based strategies.