1 code implementation • Findings (ACL) 2022 • Sen Yang, Leyang Cui, Ruoxi Ning, Di Wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
no code implementations • 6 Jun 2022 • Jiajia Zhou, Junbin Zhuang, Yan Zheng, Di Wu
As this network turns "Haar Images into Fusion Images", it is called HIFI-Net.
1 code implementation • 27 May 2022 • Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Kai Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
We observe that MIM essentially teaches the model to learn better middle-level interactions among patches and extract more generalized features.
no code implementations • 26 May 2022 • Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white- and black-box attacks.
no code implementations • 24 May 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
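For reference, the AUC metric being estimated here reduces to a rank statistic: the probability that a random positive is scored above a random negative. Below is a minimal sketch of the plain (non-private) computation only; it does not reproduce the paper's label-DP correction algorithms, and all names are illustrative.

```python
def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as half a win (the Mann-Whitney U formulation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under label DP the observed labels are noisy, which biases this estimate; the paper's contribution is computing the metric more accurately in that setting.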
1 code implementation • 19 May 2022 • Jiuqi Elise Zhang, Di Wu, Benoit Boulet
Time series anomaly detection is of critical importance for the reliable and efficient operation of real-world systems.
no code implementations • 1 May 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
One of the key points of EFD is developing a generic model to extract robust and discriminative features from different equipment for early fault detection.
no code implementations • 27 Apr 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
To address this problem, many transfer learning based EFD methods utilize historical data to learn transferable domain knowledge and conduct early fault detection on new target bearings.
1 code implementation • 20 Apr 2022 • Di Wu, Siyuan Li, Jie Yang, Mohamad Sawan
Extensive data labeling on neurophysiological signals is often prohibitively expensive or impractical, as it may require particular infrastructure or domain expertise.
no code implementations • 16 Apr 2022 • Di Wu, Yi He, Xin Luo
A high-dimensional and sparse (HiDS) matrix is frequently encountered in big data-related applications such as e-commerce systems and social network services.
no code implementations • 16 Apr 2022 • Di Wu, Peng Zhang, Yi He, Xin Luo
High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big data-related applications.
no code implementations • 2 Apr 2022 • Jia Chen, Di Wu, Xin Luo
High-dimensional and sparse (HiDS) matrices are frequently adopted to describe the complex relationships in various big data-related systems and applications.
1 code implementation • 29 Mar 2022 • BinBin Zhang, Di Wu, Zhendong Peng, Xingchen Song, Zhuoyuan Yao, Hang Lv, Lei Xie, Chao Yang, Fuping Pan, Jianwei Niu
Recently, we made available WeNet, a production-oriented end-to-end speech recognition toolkit, which introduces a unified two-pass (U2) framework and a built-in runtime to address the streaming and non-streaming decoding modes in a single model.
no code implementations • 25 Mar 2022 • Tao Fu, Huifen Zhou, Xu Ma, Z. Jason Hou, Di Wu
In this study, we develop a supervised machine learning approach to generate 1) the probability that the next operation day contains the peak hour of the month and 2) the probability that an hour is the peak hour of the day.
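The second output above can be illustrated with a toy empirical baseline: estimate, from historical load profiles, the probability of each hour being the daily peak. This sketch does not reproduce the paper's supervised model or its features; the function and data layout are assumptions for illustration only.

```python
from collections import Counter

def peak_hour_probabilities(daily_loads):
    """daily_loads: list of 24-element load profiles, one per day.
    Returns a dict hour -> empirical probability of being the daily peak."""
    peaks = Counter(max(range(24), key=lambda h: day[h]) for day in daily_loads)
    return {h: peaks[h] / len(daily_loads) for h in range(24)}
```

A supervised approach would instead condition these probabilities on weather, calendar, and recent-load features, which is what makes the paper's method more useful for day-ahead operation.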
1 code implementation • CVPR 2022 • Xuehui Yu, Pengfei Chen, Di Wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, Qixiang Ye, Zhenjun Han
In this study, we propose a POL method using coarse point annotations, relaxing the supervision signals from accurate key points to freely spotted points.
1 code implementation • 15 Mar 2022 • Di Wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang
State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data.
no code implementations • 11 Mar 2022 • Di Wu, Cheng Chen, Xiujun Chen, Junwei Pan, Xun Yang, Qing Tan, Jian Xu, Kuang-Chih Lee
To address the unstable traffic pattern challenge and achieve the optimal overall outcome, we propose a multi-agent reinforcement learning method that adjusts the bids from each guaranteed contract; it is simple, converges efficiently, and is scalable.
no code implementations • 25 Feb 2022 • Di Wu, Jie Yang, Mohamad Sawan
The proposed training scheme significantly improves the performance of patient-specific seizure predictors and bridges the gap between patient-specific and patient-independent predictors.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Ce Yang, Weihao Gao, Di Wu, Chong Wang
Simulation of the dynamics of physical systems is essential to the development of both science and engineering.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Christian Gagné, Eric Eaton
In this paper, we introduce the notion of \emph{performance gap}, an intuitive and novel measure of the distance between learning tasks.
1 code implementation • 8 Jan 2022 • Arec Jamgochian, Di Wu, Kunal Menda, Soyeon Jung, Mykel J. Kochenderfer
In this paper, we introduce the conditional approximate normalizing flow (CANF) to make probabilistic multi-step time-series forecasts when correlations are present over long time horizons.
no code implementations • 31 Dec 2021 • Xuehui Yu, Di Wu, Qixiang Ye, Jianbin Jiao, Zhenjun Han
As a result, we propose a point self-refinement approach that iteratively updates point annotations in a self-paced way.
1 code implementation • 5 Dec 2021 • Xingtai Gui, Di Wu, Yang Chang, Shicai Fan
Anomaly detection aims to separate anomalies from normal samples, and the pretrained network is promising for anomaly detection.
1 code implementation • 30 Nov 2021 • Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li
Mixup is a popular data-dependent augmentation technique for deep neural networks, which contains two sub-tasks, mixup generation and classification.
Ranked #5 on Image Classification on Tiny ImageNet Classification
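For context, the mixup generation sub-task mentioned above is commonly implemented as a convex combination of two samples and their one-hot labels with a Beta-distributed coefficient. The sketch below shows that standard formulation; the alpha value and names are illustrative and not taken from this paper.

```python
import random

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Mix two feature vectors and their label vectors with lam ~ Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

Treating generation and classification as separate sub-tasks, as the paper does, opens the door to learning the mixing policy rather than sampling it blindly.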
no code implementations • 15 Nov 2021 • Xingshuai Huang, Di Wu, Michael Jenkin, Benoit Boulet
Traffic signal control is of critical importance for the effective use of transportation infrastructures.
1 code implementation • 2 Nov 2021 • Rehmat Ullah, Di Wu, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server.
no code implementations • 27 Oct 2021 • Siyuan Li, Zelin Zang, Di Wu, ZhiYuan Chen, Stan Z. Li
Specifically, we provide a general method to model data structures by adaptively combining graph distances on the feature space and predefined graphs, then propose robust loss functions to learn the low-dimensional embedding.
no code implementations • 26 Oct 2021 • Di Wu, Yi Shi, Ziyu Wang, Jie Yang, Mohamad Sawan
Although compressive sensing (CS) can be adopted to compress the signals to reduce communication bandwidth requirement, it needs a complex reconstruction procedure before the signal can be used for seizure prediction.
1 code implementation • 7 Oct 2021 • BinBin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, Di Wu, Zhendong Peng
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, 22,400+ hours in total.
Ranked #4 on Speech Recognition on WenetSpeech
no code implementations • 29 Sep 2021 • Siyuan Li, Zicheng Liu, Di Wu, Stan Z. Li
In this paper, we decompose mixup into two sub-tasks of mixup generation and classification and formulate it for discriminative representations as class- and instance-level mixup.
no code implementations • 29 Sep 2021 • Yuwei Fu, Di Wu, Benoit Boulet
Through extensive experiments on the standard batch RL datasets, we find that non-uniform sampling is also effective in batch RL settings.
no code implementations • 29 Sep 2021 • Di Wu, Tianyu Li, David Meger, Michael Jenkin, Xue Liu, Gregory Dudek
Unfortunately, most online reinforcement learning algorithms require a large number of interactions with the environment to learn a reliable control policy.
no code implementations • EMNLP 2021 • Liang Ding, Di Wu, Dacheng Tao
We present a simple and effective pretraining strategy -- bidirectional training (BiT) for neural machine translation.
no code implementations • 24 Jul 2021 • Liang Ding, Di Wu, Dacheng Tao
Our constrained system is based on a pipeline framework, i.e., ASR and NMT.
no code implementations • 16 Jul 2021 • Jiuqi Zhang, Di Wu, Benoit Boulet
With the rapid increase in the integration of renewable energy generation and the wide adoption of various electric appliances, power grids are now faced with more and more challenges.
1 code implementation • 9 Jul 2021 • Di Wu, Rehmat Ullah, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
Further, FedAdapt adopts reinforcement learning based optimization and clustering to adaptively identify which layers of the DNN should be offloaded for each individual device on to a server to tackle the challenges of computational heterogeneity and changing network bandwidth.
1 code implementation • 5 Jul 2021 • Yipeng Zhou, Xuezheng Liu, Yao Fu, Di Wu, Chao Li, Shui Yu
In this work, we study a crucial question that has been largely overlooked by existing works: what are the optimal numbers of queries and replies in FL with DP so that the final model accuracy is maximized?
no code implementations • 30 Jun 2021 • Di Wu, Siyuan Li, Zelin Zang, Kai Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
In this paper, we first point out that current contrastive methods are prone to memorizing background/foreground texture and therefore have a limitation in localizing the foreground object.
no code implementations • 28 Jun 2021 • Huiliang Zhang, Sayani Seal, Di Wu, Benoit Boulet, Francois Bouffard, Geza Joos
Building energy management is one of the core problems in modern power grids to reduce energy consumption while ensuring occupants' comfort.
no code implementations • 10 Jun 2021 • Di Wu, BinBin Zhang, Chao Yang, Zhendong Peng, Wenjing Xia, Xiaoyu Chen, Xin Lei
On the AISHELL-1 experiment, we achieve a 4.63% character error rate (CER) with a non-streaming setup and 5.05% with a streaming setup with 320ms latency using U2++.
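The character error rate quoted above is the Levenshtein (edit) distance between the hypothesis and reference character sequences, divided by the reference length. A small self-contained sketch of that metric (not taken from the WeNet codebase):

```python
def cer(ref, hyp):
    """Levenshtein(ref, hyp) / len(ref), computed with a rolling DP row."""
    prev = list(range(len(hyp) + 1))           # distances for empty ref prefix
    for i, r in enumerate(ref, 1):
        cur = [i]                              # deleting i ref chars
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (r != h)))  # substitution / match
        prev = cur
    return prev[-1] / len(ref)
```

Note that CER can exceed 100% when the hypothesis contains many insertions.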
1 code implementation • 27 Apr 2021 • Zelin Zang, Siyuan Li, Di Wu, Jianzhu Guo, Yongjie Xu, Stan Z. Li
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space.
Ranked #2 on Node Clustering on Wiki
no code implementations • 22 Apr 2021 • Di Wu, XiaoFeng Xie, Xiang Ni, Bin Fu, Hanhui Deng, Haibo Zeng, Zhijin Qin
We further present an experiment on data anomaly detection in this architecture, and the comparison between two architectures for ECG diagnosis.
no code implementations • 13 Apr 2021 • Di Wu, Yiren Chen, Liang Ding, Dacheng Tao
A spoken language understanding (SLU) system usually consists of various pipeline components, where each component heavily relies on the results of its upstream ones.
no code implementations • 29 Mar 2021 • Valentina Popescu, Abhinav Venigalla, Di Wu, Robert Schreiber
While neural networks have been trained using IEEE-754 binary32 arithmetic, the rapid growth of computational demands in deep learning has boosted interest in faster, low precision training.
1 code implementation • 24 Mar 2021 • Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li
Specifically, AutoMix reformulates the mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.
Ranked #6 on Image Classification on Tiny ImageNet Classification
no code implementations • 11 Mar 2021 • Miao Hu, Xianzhuo Luo, Jiawen Chen, Young Choon Lee, Yipeng Zhou, Di Wu
Virtual Reality (VR) has shown great potential to revolutionize the market by providing users immersive experiences with freedom of movement.
Networking and Internet Architecture
1 code implementation • 7 Mar 2021 • Linghan Meng, Yanhui Li, Lin Chen, Zhi Wang, Di Wu, Yuming Zhou, Baowen Xu
To tackle this problem, we propose Sample Discrimination based Selection (SDS) to select efficient samples that could discriminate multiple models, i.e., the prediction behaviors (right/wrong) of these samples would be helpful to indicate the trend of model performance.
no code implementations • 19 Feb 2021 • Di Wu, Yong Zeng, Shi Jin, Rui Zhang
Two instances of CKM are proposed for beam alignment in mmWave massive MIMO systems, namely channel path map (CPM) and beam index map (BIM).
no code implementations • IWSLT (ACL) 2022 • Di Wu, Liang Ding, Shuo Yang, Mingyang Li
Recently, the performance of neural word alignment models has exceeded that of statistical models.
3 code implementations • 2 Feb 2021 • Zhuoyuan Yao, Di Wu, Xiong Wang, BinBin Zhang, Fan Yu, Chao Yang, Zhendong Peng, Xiaoyu Chen, Lei Xie, Xin Lei
In this paper, we propose an open-source, production-first, and production-ready speech recognition toolkit called WeNet, in which a new two-pass approach is implemented to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.
no code implementations • 11 Jan 2021 • Yao Fu, Yipeng Zhou, Di Wu, Shui Yu, Yonggang Wen, Chao Li
Then, we theoretically derive: 1) the conditions for the DP based FedAvg to converge as the number of global iterations (GI) approaches infinity; 2) the method to set the number of local iterations (LI) to minimize the negative influence of DP noises.
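As background for the analysis above, one round of DP based FedAvg aggregation typically clips each client update, averages the clipped updates, and adds Gaussian noise. The sketch below shows that generic pattern only; the clipping norm, noise scale, and function names are illustrative placeholders, not the paper's exact mechanism.

```python
import math
import random

def dp_fedavg_aggregate(updates, clip=1.0, sigma=0.1, rng=None):
    """Clip each client update to L2 norm `clip`, average, add Gaussian noise."""
    rng = rng or random.Random(0)

    def clipped(u):
        norm = math.sqrt(sum(v * v for v in u))
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        return [v * scale for v in u]

    clipped_updates = [clipped(u) for u in updates]
    n, dim = len(updates), len(updates[0])
    avg = [sum(u[k] for u in clipped_updates) / n for k in range(dim)]
    # per-coordinate Gaussian noise calibrated to the clipping norm (sketch)
    return [a + rng.gauss(0.0, sigma * clip / n) for a in avg]
```

The trade-off the paper analyzes follows directly: more global iterations mean more noisy aggregations, so the numbers of global and local iterations must be balanced against the privacy budget.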
no code implementations • 1 Jan 2021 • Di Wu, Liang Ding, Shuo Yang, Dacheng Tao
Recently, the performance of neural word alignment models has exceeded that of statistical models.
5 code implementations • 10 Dec 2020 • BinBin Zhang, Di Wu, Zhuoyuan Yao, Xiong Wang, Fan Yu, Chao Yang, Liyong Guo, Yaguang Hu, Lei Xie, Xin Lei
In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.
Ranked #3 on Speech Recognition on AISHELL-1
1 code implementation • COLING 2020 • Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, Zhaopeng Tu
Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting the entire target sequence.
no code implementations • International Conference on Security and Privacy in Digital Economy 2020 • Ying Zhao, Junjun Chen, Qianling Guo, Jian Teng, Di Wu
In the second learning stage, Ot uses the transfer learning method to reconstruct and re-train the model to further improve the detection performance on the specific task.
no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
Distributed, Parallel, and Cluster Computing
2 code implementations • 17 Oct 2020 • Boyuan Ma, Xiang Yin, Di Wu, Xiaojuan Ban
In this work, to handle the requirements of both output image quality and comprehensive simplicity of structure implementation, we propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
1 code implementation • EMNLP 2020 • Di Wu, Liang Ding, Fan Lu, Jian Xie
Slot filling and intent detection are two main tasks in a spoken language understanding (SLU) system.
no code implementations • 22 Sep 2020 • Shuai Yu, Xu Chen, Zhi Zhou, Xiaowen Gong, Di Wu
Ultra-dense edge computing (UDEC) has great potential, especially in the 5G era, but it still faces challenges in its current solutions, such as the lack of: i) efficient utilization of multiple 5G resources (e.g., computation, communication, storage and service resources); ii) low overhead offloading decision making and resource allocation strategies; and iii) privacy and security protection schemes.
no code implementations • 16 Sep 2020 • Zhi Wang, Chaoge Liu, Xiang Cui, Jiaxi Liu, Di Wu, Jie Yin
Experiments on Twitter show that command-embedded contents can be generated efficiently, and bots can find botmasters and obtain commands accurately.
1 code implementation • 30 Jun 2020 • Di Wu, Qi Tang, Yongle Zhao, Ming Zhang, Ying Fu, Debing Zhang
8-bit quantization has been widely applied to accelerate network inference in various deep learning applications.
no code implementations • 4 May 2020 • Ning Gao, Yong Zeng, Jian Wang, Di Wu, Chaoyue Zhang, Qingheng Song, Jiachen Qian, Shi Jin
In this paper, via extensive flight experiments, we aim to first validate the recently derived theoretical energy model for rotary-wing UAVs, and then develop a general model for those complicated flight scenarios where rigorous theoretical model derivation is quite challenging, if not impossible.
no code implementations • 17 Mar 2020 • Di Wu, Yihao Chen, Xianbiao Qi, Yongjian Yu, Weixuan Chen, Rong Xiao
We utilise the overlay between the accurate mask prediction and less accurate mesh prediction to iteratively optimise the direct regressed 6D pose information with a focus on translation estimation.
no code implementations • 6 Feb 2020 • Di Wu, Huayan Wan, Siping Liu, Weiren Yu, Zhanpeng Jin, Dakuo Wang
The "mind-controlling" capability has always been in mankind's fantasy.
no code implementations • 29 Dec 2019 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
DeepRelations shows superior interpretability to the state-of-the-art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5-, 16.9-, 19.3- and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively.
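The AUPRC metric whose fold-improvements are quoted above can be computed by step-wise integration of the precision-recall curve over score-sorted predictions. An illustrative sketch (ties in scores are not handled specially, and the names are assumptions, not the paper's code):

```python
def auprc(scores, labels):
    """Area under the precision-recall curve via step-wise interpolation."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    area, prev_recall = 0.0, 0.0
    for i in order:                      # sweep the decision threshold down
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        precision = tp / (tp + fp)
        area += (recall - prev_recall) * precision
        prev_recall = recall
    return area
```

AUPRC is the natural choice here because contact prediction is heavily class-imbalanced, where ROC-based AUC can look deceptively good.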
no code implementations • 23 Nov 2019 • Di Wu, Chao Wang, Yong Wu, De-Shuang Huang
Besides, most multi-scale models embed the multi-scale feature learning block into the deep feature extraction network, which reduces the efficiency of the inference network.
1 code implementation • The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019 • Di Wu, Zhaoyong Zhuang, Canqun Xiang, Wenbin Zou and Xia Li
We present a conceptually simple framework for 6DoF object pose estimation, especially for autonomous driving scenario.
no code implementations • 5 Jun 2019 • Min Chen, Ping Zhou, Di Wu, Long Hu, Mohammad Mehedi Hassan, Atif Alamri
First, the wide collection of data in the closed-loop information flow between the user and the remote medical data center is discussed.
no code implementations • 13 Dec 2018 • Di Wu, Hong-Wei Yang, De-Shuang Huang
Most of them focus on learning part-based feature representations of the person's body in the horizontal direction.
no code implementations • EMNLP 2018 • Yufeng Diao, Hongfei Lin, Di Wu, Liang Yang, Kan Xu, Zhihao Yang, Jian Wang, Shaowu Zhang, Bo Xu, Dongyu Zhang
In this work, we first use WordNet to understand and expand word embeddings to settle the polysemy of homographic puns, and then propose a WordNet-Encoded Collocation-Attention network model (WECA), which is combined with the context weights for recognizing the puns.
no code implementations • 26 Sep 2018 • Di Wu, Kun Zhang, Fei Cheng, Yang Zhao, Qi Liu, Chang-An Yuan, De-Shuang Huang
As a basic task of multi-camera surveillance systems, person re-identification aims to re-identify a query pedestrian observed from multiple non-overlapping cameras, or across different times with a single camera.
no code implementations • 16 Sep 2018 • Zhishuai Han, Xiaojuan Ban, Xiaokun Wang, Di Wu
Dynamic hand tracking and gesture recognition is a hard task since there are many joints on the fingers and each joint has many degrees of freedom.
Human-Computer Interaction
no code implementations • 10 Sep 2018 • Di Wu, Cheng Chen, Xun Yang, Xiujun Chen, Qing Tan, Jian Xu, Kun Gai
With this formulation, we derive the optimal impression allocation strategy by solving the optimal bidding functions for contracts.
2 code implementations • 20 Jun 2018 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
Motivation: Drug discovery demands rapid quantification of compound-protein interaction (CPI).
no code implementations • 23 Feb 2018 • Di Wu, Xiujun Chen, Xun Yang, Hao Wang, Qing Tan, Xiaoxun Zhang, Jian Xu, Kun Gai
Our analysis shows that the immediate reward from environment is misleading under a critical resource constraint.
no code implementations • 31 Dec 2017 • Zhijian Liu, Di Wu, Hongyu Wei, Guoqing Cao
It is indicated that the theories and applications of machine learning methods in the fields of energy conservation and indoor environment are not yet mature, owing to the difficulty of determining a model structure with good predictive performance.
1 code implementation • 2 Aug 2017 • Di Wu, Wenbin Zou, Xia Li, Yong Zhao
Visual tracking is intrinsically a temporal problem.
no code implementations • Neurocomputing 2017 • Di Wu, Mingsheng Shang, Xin Luo, Ji Xu, Huyong Yan, Weihui Deng, Guoyin Wang
Having a multitude of unlabeled data and few labeled ones is a common problem in many practical applications.
no code implementations • CVPR 2014 • Di Wu, Ling Shao
Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data.