2 code implementations • Findings (ACL) 2022 • Sen Yang, Leyang Cui, Ruoxi Ning, Di Wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
no code implementations • 18 Sep 2023 • Renhe Chen, Albert Lee, ZiRui Wang, Di Wu, Xufeng Kou
This brief introduces a read bias circuit to improve readout yield of magnetic random access memories (MRAMs).
no code implementations • 2 Aug 2023 • Yuzhu Li, Nir Pillar, Jingxi Li, Tairan Liu, Di Wu, Songyu Sun, Guangdong Ma, Kevin De Haan, Luzhe Huang, Sepehr Hamidi, Anatoly Urisman, Tal Keidar Haran, William Dean Wallace, Jonathan E. Zuckerman, Aydogan Ozcan
Histological examination is a crucial step in an autopsy; however, the traditional histochemical staining of post-mortem samples faces multiple challenges, including the inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, as well as the resource-intensive nature of chemical staining procedures covering large tissue areas, which demand substantial labor, cost, and time.
1 code implementation • 22 Jul 2023 • Di Wu, Pengfei Chen, Xuehui Yu, Guorong Li, Zhenjun Han, Jianbin Jiao
Object detection supervised by inaccurate bounding boxes has attracted broad interest, due to the expense of high-quality annotation data or the occasional inevitability of low annotation quality (e.g., tiny objects).
no code implementations • 18 Jul 2023 • Xingyue Ma, L. Bellaiche, Di Wu, Yurong Yang
The first-principles-based effective Hamiltonian is widely used to predict and simulate the properties of ferroelectrics and relaxor ferroelectrics.
1 code implementation • 23 Jun 2023 • Amal Feriani, Di Wu, Steve Liu, Greg Dudek
This work offers a comprehensive and unified framework to help researchers evaluate and design data-driven channel estimation algorithms.
no code implementations • 19 Jun 2023 • Liping Zhang, Di Wu, Xin Luo
Then, based on the idea of stacking ensembles, long short-term memory is employed as an error-correction module to forecast the components separately, and the forecast results are treated as new features to be fed into extreme gradient boosting for the second-step forecasting.
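The two-step idea above can be sketched in miniature: a first-stage forecaster whose in-sample errors train a second-stage corrector that takes the first-stage output as a new feature. This is an illustrative stand-in only; a simple moving average and a one-coefficient least-squares fit replace the LSTM and extreme gradient boosting models used in the paper.

```python
def moving_average_forecast(series, window=3):
    """First stage: predict the next value as the mean of the last `window` points."""
    return sum(series[-window:]) / window

def fit_error_correction(features, errors):
    """Second stage: fit a one-coefficient linear model error ~ a * feature (least squares)."""
    num = sum(f * e for f, e in zip(features, errors))
    den = sum(f * f for f in features) or 1.0
    return num / den

def two_step_forecast(series, window=3):
    # Build first-stage forecasts over the history and record their errors.
    feats, errs = [], []
    for t in range(window, len(series)):
        pred = sum(series[t - window:t]) / window
        feats.append(pred)
        errs.append(series[t] - pred)
    # Learn a correction from first-stage outputs, then apply it to the new forecast.
    a = fit_error_correction(feats, errs)
    base = moving_average_forecast(series, window)
    return base + a * base
```

On a trending series the corrected forecast recovers part of the bias that the plain moving average leaves behind, which is the role the error-correction module plays in the described pipeline.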
no code implementations • 31 May 2023 • Yan Wang, Feng Shu, Zhihong Zhuang, Rongen Dong, Qi Zhang, Di Wu, Liang Yang, Jiangzhou Wang
Numerical simulation results show that a 3-bit discrete phase shifter is required to achieve a trivial performance loss for a large-scale active IRS.
1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti
We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.
no code implementations • 23 May 2023 • Di Wu, Christof Monz
Using a shared vocabulary is common practice in Multilingual Neural Machine Translation (MNMT).
no code implementations • 22 May 2023 • Chu-Kuan Jiang, Yang-Fan Deng, Hongxiao Guo, Guang-Hao Chen, Di Wu
Typical pretreated wastewater was synthesized with chemical oxygen demand of 110 mg/L, sulfate of 50 mg S/L, and varying dissolved oxygen (DO) and was fed into a moving-bed biofilm reactor (MBBR).
no code implementations • 18 May 2023 • Xingchen Song, Di Wu, BinBin Zhang, Zhendong Peng, Bo Dang, Fuping Pan, Zhiyong Wu
In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective training-free methods to decrease the Token Display Time (TDT) of streaming ASR models without any accuracy loss.
no code implementations • 10 May 2023 • Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di Wu
In this paper, we propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on models collected from clients.
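As a hypothetical sketch of the server-side step described above: for each client, the parameter server weights every collected model by its similarity to that client's own model and averages accordingly, yielding a personalized aggregate per client. The inverse-squared-distance weighting rule here is an illustrative assumption, not the weight computation FedDWA actually specifies.

```python
def personalized_aggregate(models):
    """models: dict client_id -> parameter vector (list of floats).
    Returns dict client_id -> personalized aggregated vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    result = {}
    for cid, own in models.items():
        # Closer models receive larger weights for this client's aggregate.
        weights = {other: 1.0 / (1.0 + dist2(own, params))
                   for other, params in models.items()}
        total = sum(weights.values())
        dim = len(own)
        result[cid] = [sum(weights[o] * models[o][i] for o in models) / total
                       for i in range(dim)]
    return result
```

The point of the personalization is visible even in this toy: a client far from the others keeps an aggregate close to its own model, while similar clients share most of their updates.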
no code implementations • 9 May 2023 • Yunchao Yang, Yipeng Zhou, Miao Hu, Di Wu, Quan Z. Sheng
The challenge of this problem lies in the opaque feedback between reward budget allocation and model utility improvement of FL, making the optimal reward budget allocation complicated.
no code implementations • 27 Mar 2023 • Di Wu, Da Yin, Kai-Wei Chang
Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation only relies on exact matching with human references and disregards reference-free attributes.
no code implementations • 25 Mar 2023 • Miao Hu, Zhenxiao Luo, Amirmohammad Pasdar, Young Choon Lee, Yipeng Zhou, Di Wu
Edge computing has been gaining momentum with ever-increasing data at the edge of the network.
no code implementations • 22 Mar 2023 • Borui Cai, Yong Xiang, Longxiang Gao, Di Wu, He Zhang, Jiong Jin, Tom Luan
Specifically, we view the concatenation of all entity representations as an embedding layer; conventional KGE methods that adopt high-dimensional entity representations are then equivalent to enlarging the width of the embedding layer to gain expressiveness.
no code implementations • 22 Mar 2023 • Abhisek Konar, Di Wu, Yi Tian Xu, Seowoo Jang, Steve Liu, Gregory Dudek
Engineering this reward function is challenging, because it requires expert knowledge and there is no general consensus on the form of an optimal reward function.
no code implementations • 22 Mar 2023 • Yi Tian Xu, Jimmy Li, Di Wu, Michael Jenkin, Seowoo Jang, Xue Liu, Gregory Dudek
When deploying to an unknown traffic scenario, we select a policy from the policy bank based on the similarity between the previous-day traffic of the current scenario and the traffic observed during training.
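The selection rule described above reduces to a nearest-neighbor lookup over traffic profiles, sketched below under assumptions: policies are tagged with the traffic seen during their training, and similarity is plain Euclidean distance (the paper's actual similarity measure may differ).

```python
def select_policy(policy_bank, observed_traffic):
    """policy_bank: list of (policy_name, training_traffic_profile) pairs.
    observed_traffic: previous-day traffic profile (list of floats).
    Returns the name of the policy whose training traffic is closest."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_name, _ = min(policy_bank, key=lambda p: distance(p[1], observed_traffic))
    return best_name
```

For example, a scenario whose previous-day traffic resembles a heavy-load profile would be matched to the policy trained under heavy load.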
no code implementations • 14 Mar 2023 • Jikun Kang, Di Wu, Ju Wang, Ekram Hossain, Xue Liu, Gregory Dudek
In cellular networks, User Equipments (UEs) hand off from one Base Station (BS) to another, giving rise to the load balancing problem among the BSs.
no code implementations • 11 Mar 2023 • Xijuan Sun, Di Wu, Arnaud Zinflou, Benoit Boulet
Usually, machine learning-based methods need to model the normal data distribution.
no code implementations • 25 Feb 2023 • Ruiyang Xu, Di Wu, Xin Luo
Traditional feature selection methods need to know the feature space before learning; online streaming feature selection (OSFS) is therefore proposed to process streaming features on the fly.
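A minimal, hypothetical sketch of processing streaming features on the fly: maintain a selected set and admit a newly arriving feature only if its absolute Pearson correlation with the label exceeds a threshold. A real OSFS method also checks redundancy against already-selected features; that step is omitted here, and all names and the threshold are illustrative.

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def stream_select(feature_stream, labels, threshold=0.5):
    """feature_stream: iterable of (name, values) arriving one at a time.
    Each feature is judged immediately, without seeing future features."""
    selected = []
    for name, values in feature_stream:
        if abs(pearson(values, labels)) >= threshold:
            selected.append(name)
    return selected
```

The key property mirrored here is that each feature is accepted or discarded at arrival time, without knowing the full feature space in advance.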
no code implementations • 7 Feb 2023 • Huiliang Zhang, Di Wu, Benoit Boulet
Safety has been recognized as the central obstacle preventing the use of reinforcement learning (RL) in real-world applications.
no code implementations • 3 Feb 2023 • Igor Kozlov, Dmitriy Rivkin, Wei-Di Chang, Di Wu, Xue Liu, Gregory Dudek
Such networks undergo frequent and often heterogeneous changes caused by network operators, who are seeking to tune their system parameters for optimal performance.
no code implementations • 12 Jan 2023 • Amir Farakhor, Di Wu, Yebin Wang, Huazhen Fang
An optimal power management approach is developed to extensively exploit the merits of the proposed design.
no code implementations • CVPR 2023 • Xinglin Li, Jiajing Chen, Jinhui Ouyang, Hanhui Deng, Senem Velipasalar, Di Wu
Recent years have witnessed significant developments in point cloud processing, including classification and segmentation.
no code implementations • 20 Dec 2022 • Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, DaCheng Tao
Machine Translation Quality Estimation (QE) is the task of evaluating translation output in the absence of human-written references.
1 code implementation • 20 Dec 2022 • Di Wu, Wasi Uddin Ahmad, Kai-Wei Chang
However, a systematic study of how the two types of approaches compare and how different design choices affect the performance of PLM-based models is still lacking.
no code implementations • 20 Dec 2022 • Cheng Liang, Teng Huang, Yi He, Song Deng, Di Wu, Xin Luo
The idea of the proposed MMA is mainly two-fold: 1) apply different $L_p$-norms to the loss function and regularization to form variant models in different metric spaces, and 2) aggregate these variant models.
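The two-fold idea can be illustrated with the smallest possible "model": a single scalar fitted to data under an $L_p$ objective ($p=2$ gives the mean, $p=1$ the median), with the variants then averaged. This is a sketch of the metric-space intuition only, not MMA's actual latent-factor models; the grid-search fitting and plain averaging are assumptions.

```python
def fit_lp_center(values, p, grid_steps=2001):
    """Find the scalar c minimizing sum(|v - c|^p) by grid search over [min, max]."""
    lo, hi = min(values), max(values)
    best_c, best_loss = float(lo), float("inf")
    for i in range(grid_steps):
        c = lo + (hi - lo) * i / (grid_steps - 1)
        loss = sum(abs(v - c) ** p for v in values)
        if loss < best_loss:
            best_c, best_loss = c, loss
    return best_c

def aggregate_variants(values, ps=(1.0, 2.0)):
    """Form one variant per Lp metric, then aggregate by simple averaging."""
    variants = [fit_lp_center(values, p) for p in ps]
    return sum(variants) / len(variants)
```

On data with an outlier, the $L_1$ variant resists the outlier while the $L_2$ variant follows it; the aggregate sits between the two, which is the robustness-accuracy trade-off the ensemble of metric-space variants aims for.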
no code implementations • 16 Dec 2022 • Rongxing Hu, Kai Ye, Hyeonjin Kim, Hanpyo Lee, Ning Lu, Di Wu, PJ Rehm
This paper presents a coordinative demand charge mitigation (DCM) strategy for reducing electricity consumption during system peak periods.
no code implementations • CVPR 2023 • Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu
UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
no code implementations • 9 Dec 2022 • Kai Ye, Hyeonjin Kim, Yi Hu, Ning Lu, Di Wu, PJ Rehm
This paper presents a modified sequence-to-point (S2P) algorithm for disaggregating the heating, ventilation, and air conditioning (HVAC) load from the total building electricity consumption.
no code implementations • 29 Nov 2022 • Yiyan Li, Lidong Song, Yi Hu, Hanpyo Lee, Di Wu, PJ Rehm, Ning Lu
We propose a Generator structure consisting of a coarse network and a fine-tuning network.
no code implementations • 16 Nov 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di Wu
To our knowledge, we are the first to present an FSSL method that uses only 10% of clients with labeled data, while still achieving superior performance compared to standard federated supervised learning, which uses all clients with labeled data.
4 code implementations • 7 Nov 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, ZhiYuan Chen, Jiangbin Zheng, Stan Z. Li
Since the recent success of Vision Transformers (ViTs), explorations toward ViT-style architectures have triggered the resurgence of ConvNets.
Ranked #1 on Instance Segmentation on COCO test-dev (AP50 metric)
no code implementations • 7 Nov 2022 • Han Pyo Lee, Lidong Song, Yiyan Li, Ning Lu, Di Wu, PJ Rehm, Matthew Makdad, Edmond Miller
The proposed algorithm consists of two key steps: selection of similar days and iterative bidirectional-GB training.
2 code implementations • 1 Nov 2022 • Xingchen Song, Di Wu, Zhiyong Wu, BinBin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu
In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models.
no code implementations • 31 Oct 2022 • Xingchen Song, Di Wu, BinBin Zhang, Zhiyong Wu, Wenpeng Li, Dongfang Li, Pengshen Zhang, Zhendong Peng, Fuping Pan, Changbao Zhu, Zhongqin Wu
Therefore, we name it FusionFormer.
Automatic Speech Recognition (ASR)
no code implementations • 23 Oct 2022 • Huiliang Zhang, Di Wu, Benoit Boulet
The building sector has been recognized as one of the primary sectors for worldwide energy consumption.
no code implementations • 12 Oct 2022 • Wenjian Hao, Bowen Huang, Wei Pan, Di Wu, Shaoshuai Mou
This paper presents a data-driven approach to approximate the dynamics of a nonlinear time-varying system (NTVS) by a linear time-varying system (LTVS), which results from the Koopman operator and deep neural networks.
no code implementations • 3 Oct 2022 • Di Wu, Jie Yang, Mohamad Sawan
In this survey, we assess the eligibility of more than fifty published peer-reviewed representative transfer learning approaches for EMG applications.
no code implementations • 19 Sep 2022 • Hyeonjin Kim, Kai Ye, Han Pyo Lee, Rongxing Hu, Ning Lu, Di Wu, PJ Rehm
The residual load profiles are processed using ICA for HVAC load extraction.
1 code implementation • 11 Sep 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Stan Z. Li
With the remarkable progress of deep neural networks in computer vision, data mixing augmentation techniques are widely studied to alleviate problems of degraded generalization when the amount of training data is limited.
1 code implementation • 25 Aug 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple participants.
no code implementations • 23 Aug 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di Wu, Qian Li
Federated learning (FL) facilitates multiple clients to jointly train a machine learning model without sharing their private data.
no code implementations • 16 Aug 2022 • Yuting Ding, Di Wu
In the past decade, scholars have conducted extensive research on the recovery of missing traffic data; however, how to make full use of spatio-temporal traffic patterns to improve recovery performance remains an open problem.
no code implementations • 13 Aug 2022 • Yuanyi Liu, Jia Chen, Di Wu
The A2BAS algorithm consists of two sub-algorithms.
no code implementations • 2 Aug 2022 • Xin Cheng, Feng Shu, YiFan Li, Zhihong Zhuang, Di Wu, Jiangzhou Wang
In this paper, optimal geometrical configurations of UAVs in received signal strength (RSS)-based localization under region constraints are investigated.
no code implementations • 2 Aug 2022 • Feilong Chen, Di Wu, Jie Yang, Yi He
In many real applications, such as intelligent healthcare platforms, streaming features often contain missing data, which raises a crucial challenge in conducting OSFS, i.e., how to establish the uncertain relationship between sparse streaming features and labels.
no code implementations • 14 Jul 2022 • Ningkun Zheng, Xin Qin, Di Wu, Gabe Murtaugh, Bolun Xu
Combined with an optimal bidding design algorithm using dynamic programming, our paper shows that the SoC segment market model provides more accurate representations of the opportunity costs of energy storage compared to existing power-based bidding models.
2 code implementations • 7 Jul 2022 • Zelin Zang, Siyuan Li, Di Wu, Ge Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
To overcome the underconstrained embedding problem, we design a loss and theoretically demonstrate that it leads to a more suitable embedding based on the local flatness.
Ranked #2 on Image Classification on ImageNet-100
no code implementations • AAAI Conference on Artificial Intelligence 2022 • Yuwei Fu, Di Wu, Benoit Boulet
To deal with this challenge, we propose a reinforcement learning (RL) based model combination (RLMC) framework for determining model weights in an ensemble for time series forecasting tasks.
no code implementations • 6 Jun 2022 • Jiajia Zhou, Junbin Zhuang, Yan Zheng, Di Wu
As this network turns "Haar Images into Fusion Images", it is called HIFI-Net.
3 code implementations • 27 May 2022 • Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Stan Z. Li
We then propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way.
no code implementations • 26 May 2022 • Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
no code implementations • 24 May 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
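For reference, the AUC metric mentioned above can be computed directly from scores and binary labels as the probability that a randomly chosen positive is ranked above a randomly chosen negative. The sketch below is the plain, non-private metric that the paper's algorithms approximate under label DP, not the proposed correction itself.

```python
def auc(scores, labels):
    """Pairwise (Mann-Whitney) AUC: fraction of positive/negative pairs
    ranked correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under label DP the labels used for evaluation are noisy, which biases this quantity; accounting for that noise is exactly what the two proposed evaluation algorithms address.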
1 code implementation • 19 May 2022 • Jiuqi Elise Zhang, Di Wu, Benoit Boulet
Time series anomaly detection has been recognized as of critical importance for the reliable and efficient operation of real-world systems.
no code implementations • 1 May 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
One of the key points of EFD is developing a generic model to extract robust and discriminative features from different equipment for early fault detection.
no code implementations • 27 Apr 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
To address this problem, many transfer learning based EFD methods utilize historical data to learn transferable domain knowledge and conduct early fault detection on new target bearings.
1 code implementation • 20 Apr 2022 • Di Wu, Siyuan Li, Jie Yang, Mohamad Sawan
Extensive data labeling on neurophysiological signals is often prohibitively expensive or impractical, as it may require particular infrastructure or domain expertise.
no code implementations • 16 Apr 2022 • Di Wu, Yi He, Xin Luo
A High-dimensional and sparse (HiDS) matrix is frequently encountered in a big data-related application like an e-commerce system or a social network services system.
no code implementations • 16 Apr 2022 • Di Wu, Peng Zhang, Yi He, Xin Luo
High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big data-related applications.
no code implementations • 2 Apr 2022 • Jia Chen, Di Wu, Xin Luo
High-dimensional and sparse (HiDS) matrices are frequently adopted to describe the complex relationships in various big data-related systems and applications.
3 code implementations • 29 Mar 2022 • BinBin Zhang, Di Wu, Zhendong Peng, Xingchen Song, Zhuoyuan Yao, Hang Lv, Lei Xie, Chao Yang, Fuping Pan, Jianwei Niu
Recently, we made available WeNet, a production-oriented end-to-end speech recognition toolkit, which introduces a unified two-pass (U2) framework and a built-in runtime to address the streaming and non-streaming decoding modes in a single model.
no code implementations • 25 Mar 2022 • Tao Fu, Huifen Zhou, Xu Ma, Z. Jason Hou, Di Wu
In this study, we develop a supervised machine learning approach to generate 1) the probability of the next operation day containing the peak hour of the month and 2) the probability of an hour to be the peak hour of the day.
2 code implementations • CVPR 2022 • Xuehui Yu, Pengfei Chen, Di Wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, Qixiang Ye, Zhenjun Han
In this study, we propose a POL method using coarse point annotations, relaxing the supervision signals from accurate key points to freely spotted points.
1 code implementation • 15 Mar 2022 • Di Wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang
State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data.
no code implementations • 11 Mar 2022 • Di Wu, Cheng Chen, Xiujun Chen, Junwei Pan, Xun Yang, Qing Tan, Jian Xu, Kuang-Chih Lee
In order to address the unstable traffic pattern challenge and achieve the optimal overall outcome, we propose a multi-agent reinforcement learning method to adjust the bids from each guaranteed contract; the method is simple, converges efficiently, and is scalable.
no code implementations • 25 Feb 2022 • Di Wu, Jie Yang, Mohamad Sawan
The proposed training scheme significantly improves the performance of patient-specific seizure predictors and bridges the gap between patient-specific and patient-independent predictors.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Ce Yang, Weihao Gao, Di Wu, Chong Wang
Simulation of the dynamics of physical systems is essential to the development of both science and engineering.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
1 code implementation • 8 Jan 2022 • Arec Jamgochian, Di Wu, Kunal Menda, Soyeon Jung, Mykel J. Kochenderfer
In this paper, we introduce the conditional approximate normalizing flow (CANF) to make probabilistic multi-step time-series forecasts when correlations are present over long time horizons.
no code implementations • 31 Dec 2021 • Xuehui Yu, Di Wu, Qixiang Ye, Jianbin Jiao, Zhenjun Han
As a result, we propose a point self-refinement approach that iteratively updates point annotations in a self-paced way.
1 code implementation • 5 Dec 2021 • Xingtai Gui, Di Wu, Yang Chang, Shicai Fan
Anomaly detection aims to separate anomalies from normal samples, and the pretrained network is promising for anomaly detection.
Ranked #8 on Anomaly Detection on One-class CIFAR-10
1 code implementation • 30 Nov 2021 • Siyuan Li, Zicheng Liu, Zedong Wang, Di Wu, Zihan Liu, Stan Z. Li
Accordingly, we propose $\eta$-balanced mixup loss for complementary learning of the two sub-objectives.
Ranked #7 on Image Classification on Places205
no code implementations • 15 Nov 2021 • Xingshuai Huang, Di Wu, Michael Jenkin, Benoit Boulet
Traffic signal control is of critical importance for the effective use of transportation infrastructures.
1 code implementation • 2 Nov 2021 • Rehmat Ullah, Di Wu, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
Our empirical results on the CIFAR10 dataset, with both balanced and imbalanced data distributions, support our claims that FedFly can reduce training time by up to 33% when a device moves after 50% of the training is completed, and by up to 45% when 90% of the training is completed, compared to the state-of-the-art offloading approach in FL.
1 code implementation • 27 Oct 2021 • Siyuan Li, Zicheng Liu, Zelin Zang, Di Wu, ZhiYuan Chen, Stan Z. Li
Unsupervised representation learning (URL) that learns compact embeddings of high-dimensional data without supervision has achieved remarkable progress recently.
no code implementations • 26 Oct 2021 • Di Wu, Yi Shi, Ziyu Wang, Jie Yang, Mohamad Sawan
Although compressive sensing (CS) can be adopted to compress the signals to reduce communication bandwidth requirement, it needs a complex reconstruction procedure before the signal can be used for seizure prediction.
1 code implementation • 7 Oct 2021 • BinBin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, Di Wu, Zhendong Peng
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, 22,400+ hours in total.
Ranked #5 on Speech Recognition on WenetSpeech
no code implementations • 29 Sep 2021 • Di Wu, Tianyu Li, David Meger, Michael Jenkin, Xue Liu, Gregory Dudek
Unfortunately, most online reinforcement learning algorithms require a large number of interactions with the environment to learn a reliable control policy.
no code implementations • 29 Sep 2021 • Siyuan Li, Zicheng Liu, Di Wu, Stan Z. Li
In this paper, we decompose mixup into two sub-tasks of mixup generation and classification and formulate it for discriminative representations as class- and instance-level mixup.
no code implementations • 29 Sep 2021 • Yuwei Fu, Di Wu, Benoit Boulet
Through extensive experiments on the standard batch RL datasets, we find that non-uniform sampling is also effective in batch RL settings.
no code implementations • EMNLP 2021 • Liang Ding, Di Wu, DaCheng Tao
We present a simple and effective pretraining strategy -- bidirectional training (BiT) for neural machine translation.
no code implementations • 24 Jul 2021 • Liang Ding, Di Wu, DaCheng Tao
Our constrained system is based on a pipeline framework, i.e., ASR and NMT.
no code implementations • 16 Jul 2021 • Jiuqi Zhang, Di Wu, Benoit Boulet
With the rapid increase in the integration of renewable energy generation and the wide adoption of various electric appliances, power grids are now faced with more and more challenges.
1 code implementation • 9 Jul 2021 • Di Wu, Rehmat Ullah, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
Further, FedAdapt adopts reinforcement learning based optimization and clustering to adaptively identify which layers of the DNN should be offloaded for each individual device on to a server to tackle the challenges of computational heterogeneity and changing network bandwidth.
1 code implementation • 5 Jul 2021 • Yipeng Zhou, Xuezheng Liu, Yao Fu, Di Wu, Chao Li, Shui Yu
In this work, we study a crucial question which has been vastly overlooked by existing works: what are the optimal numbers of queries and replies in FL with DP so that the final model accuracy is maximized.
1 code implementation • 30 Jun 2021 • Di Wu, Siyuan Li, Zelin Zang, Stan Z. Li
Self-supervised contrastive learning has demonstrated great potential in learning visual representations.
Ranked #18 on Fine-Grained Image Classification on NABirds
no code implementations • 28 Jun 2021 • Huiliang Zhang, Sayani Seal, Di Wu, Benoit Boulet, Francois Bouffard, Geza Joos
Building energy management is one of the core problems in modern power grids to reduce energy consumption while ensuring occupants' comfort.
no code implementations • 10 Jun 2021 • Di Wu, BinBin Zhang, Chao Yang, Zhendong Peng, Wenjing Xia, Xiaoyu Chen, Xin Lei
In experiments on AISHELL-1, we achieve a 4.63% character error rate (CER) with a non-streaming setup and 5.05% with a streaming setup with 320 ms latency using U2++.
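The character error rate (CER) quoted above is the character-level edit distance between hypothesis and reference divided by the reference length. A minimal sketch of that standard metric (plain Levenshtein dynamic programming, not WeNet's own scoring code):

```python
def cer(hyp, ref):
    """Character error rate: edit distance (insert/delete/substitute, cost 1
    each) between hyp and ref, normalized by the reference length."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all remaining hypothesis characters
    for j in range(n + 1):
        d[0][j] = j  # insert all remaining reference characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n] / n
```

A 4.63% CER thus means roughly 4.63 character edits per 100 reference characters, averaged over the test set.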
1 code implementation • 27 Apr 2021 • Zelin Zang, Siyuan Li, Di Wu, Jianzhu Guo, Yongjie Xu, Stan Z. Li
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space.
Ranked #2 on Node Clustering on Wiki
no code implementations • 22 Apr 2021 • Di Wu, XiaoFeng Xie, Xiang Ni, Bin Fu, Hanhui Deng, Haibo Zeng, Zhijin Qin
We further present an experiment on data anomaly detection in this architecture, and the comparison between two architectures for ECG diagnosis.
no code implementations • 13 Apr 2021 • Di Wu, Yiren Chen, Liang Ding, DaCheng Tao
A spoken language understanding (SLU) system usually consists of various pipeline components, where each component heavily relies on the results of its upstream components.
Automatic Speech Recognition (ASR)
no code implementations • 29 Mar 2021 • Valentina Popescu, Abhinav Venigalla, Di Wu, Robert Schreiber
While neural networks have been trained using IEEE-754 binary32 arithmetic, the rapid growth of computational demands in deep learning has boosted interest in faster, low precision training.
2 code implementations • 24 Mar 2021 • Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li
Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.
Ranked #8 on Image Classification on Places205
no code implementations • 11 Mar 2021 • Miao Hu, Xianzhuo Luo, Jiawen Chen, Young Choon Lee, Yipeng Zhou, Di Wu
Virtual Reality (VR) has shown great potential to revolutionize the market by providing users immersive experiences with freedom of movement.
Networking and Internet Architecture
1 code implementation • 7 Mar 2021 • Linghan Meng, Yanhui Li, Lin Chen, Zhi Wang, Di Wu, Yuming Zhou, Baowen Xu
To tackle this problem, we propose Sample Discrimination based Selection (SDS) to select efficient samples that can discriminate multiple models, i.e., the prediction behaviors (right/wrong) of these samples would be helpful to indicate the trend of model performance.
no code implementations • 19 Feb 2021 • Di Wu, Yong Zeng, Shi Jin, Rui Zhang
Two instances of CKM are proposed for beam alignment in mmWave massive MIMO systems, namely channel path map (CPM) and beam index map (BIM).
no code implementations • IWSLT (ACL) 2022 • Di Wu, Liang Ding, Shuo Yang, Mingyang Li
Recently, the performance of neural word alignment models has exceeded that of statistical models.
3 code implementations • 2 Feb 2021 • Zhuoyuan Yao, Di Wu, Xiong Wang, BinBin Zhang, Fan Yu, Chao Yang, Zhendong Peng, Xiaoyu Chen, Lei Xie, Xin Lei
In this paper, we propose an open source, production first, and production ready speech recognition toolkit called WeNet in which a new two-pass approach is implemented to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.
no code implementations • 11 Jan 2021 • Yao Fu, Yipeng Zhou, Di Wu, Shui Yu, Yonggang Wen, Chao Li
Then, we theoretically derive: 1) the conditions for the DP based FedAvg to converge as the number of global iterations (GI) approaches infinity; 2) the method to set the number of local iterations (LI) to minimize the negative influence of DP noises.
no code implementations • 1 Jan 2021 • Di Wu, Liang Ding, Shuo Yang, DaCheng Tao
Recently, the performance of neural word alignment models has exceeded that of statistical models.
4 code implementations • 10 Dec 2020 • BinBin Zhang, Di Wu, Zhuoyuan Yao, Xiong Wang, Fan Yu, Chao Yang, Liyong Guo, Yaguang Hu, Lei Xie, Xin Lei
In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.
Ranked #5 on Speech Recognition on AISHELL-1
1 code implementation • COLING 2020 • Liang Ding, Longyue Wang, Di Wu, DaCheng Tao, Zhaopeng Tu
Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting the entire target sequence.
no code implementations • International Conference on Security and Privacy in Digital Economy 2020 • Ying Zhao, Junjun Chen, Qianling Guo, Jian Teng, Di Wu
In the second learning stage, Ot uses the transfer learning method to reconstruct and re-train the model to further improve the detection performance on the specific task.
no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
Distributed, Parallel, and Cluster Computing
2 code implementations • 17 Oct 2020 • Boyuan Ma, Xiang Yin, Di Wu, Xiaojuan Ban
In this work, to handle the requirements of both output image quality and comprehensive simplicity of structure implementation, we propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
1 code implementation • EMNLP 2020 • Di Wu, Liang Ding, Fan Lu, Jian Xie
Slot filling and intent detection are two main tasks in spoken language understanding (SLU) system.
no code implementations • 22 Sep 2020 • Shuai Yu, Xu Chen, Zhi Zhou, Xiaowen Gong, Di Wu
Ultra-dense edge computing (UDEC) has great potential, especially in the 5G era, but its current solutions still face challenges, such as the lack of: i) efficient utilization of multiple 5G resources (e.g., computation, communication, storage and service resources); ii) low-overhead offloading decision making and resource allocation strategies; and iii) privacy and security protection schemes.
no code implementations • 16 Sep 2020 • Zhi Wang, Chaoge Liu, Xiang Cui, Jie Yin, Jiaxi Liu, Di Wu, Qixu Liu
The defender can limit the attacker once it is exposed.
1 code implementation • 30 Jun 2020 • Di Wu, Qi Tang, Yongle Zhao, Ming Zhang, Ying Fu, Debing Zhang
8-bit quantization has been widely applied to accelerate network inference in various deep learning applications.
no code implementations • 4 May 2020 • Ning Gao, Yong Zeng, Jian Wang, Di Wu, Chaoyue Zhang, Qingheng Song, Jiachen Qian, Shi Jin
In this paper, via extensive flight experiments, we aim to firstly validate the recently derived theoretical energy model for rotary-wing UAVs, and then develop a general model for those complicated flight scenarios where rigorous theoretical model derivation is quite challenging, if not impossible.
no code implementations • 17 Mar 2020 • Di Wu, Yihao Chen, Xianbiao Qi, Yongjian Yu, Weixuan Chen, Rong Xiao
We utilise the overlay between the accurate mask prediction and less accurate mesh prediction to iteratively optimise the direct regressed 6D pose information with a focus on translation estimation.
no code implementations • 6 Feb 2020 • Di Wu, Huayan Wan, Siping Liu, Weiren Yu, Zhanpeng Jin, Dakuo Wang
The capability of "mind control" has long been a human fantasy.
no code implementations • 29 Dec 2019 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
DeepRelations shows superior interpretability to the state of the art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5-, 16.9-, 19.3-, and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively.
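For context, a fold improvement in AUPRC is simply the ratio of two average-precision values on the same test set; a toy sketch using a simplified average-precision computation (data and names here are made up for illustration):

```python
import numpy as np

def average_precision(y_true, scores):
    """AUPRC as average precision: mean of precision@k taken at each positive hit."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)
    precision_at_k = hits / (np.arange(len(y)) + 1)
    return float((precision_at_k * y).sum() / y.sum())

# Toy contact-prediction labels and two models' scores on the same examples.
y_true = [1, 0, 1, 0, 0, 1]
model_scores = [0.9, 0.2, 0.8, 0.1, 0.3, 0.7]    # ranks all positives first
baseline_scores = [0.1, 0.9, 0.2, 0.8, 0.7, 0.3]  # ranks all negatives first
fold = average_precision(y_true, model_scores) / average_precision(y_true, baseline_scores)
```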
no code implementations • 23 Nov 2019 • Di Wu, Chao Wang, Yong Wu, De-Shuang Huang
Moreover, most multi-scale models embed the multi-scale feature learning block into the deep feature extraction network, which reduces the efficiency of the inference network.
1 code implementation • The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019 • Di Wu, Zhaoyong Zhuang, Canqun Xiang, Wenbin Zou and Xia Li
We present a conceptually simple framework for 6DoF object pose estimation, especially for autonomous driving scenarios.
no code implementations • 5 Jun 2019 • Min Chen, Ping Zhou, Di Wu, Long Hu, Mohammad Mehedi Hassan, Atif Alamri
First, we discuss the wide collection of data in the closed-loop information flow between the user and the remote medical data center.
no code implementations • 13 Dec 2018 • Di Wu, Hong-Wei Yang, De-Shuang Huang
Most of them focus on learning part-based feature representations of the human body in the horizontal direction.
no code implementations • EMNLP 2018 • Yufeng Diao, Hongfei Lin, Di Wu, Liang Yang, Kan Xu, Zhihao Yang, Jian Wang, Shaowu Zhang, Bo Xu, Dongyu Zhang
In this work, we first use WordNet to understand and expand word embeddings to resolve the polysemy of homographic puns, and then propose a WordNet-Encoded Collocation-Attention network model (WECA), combined with context weights, for recognizing the puns.
no code implementations • 26 Sep 2018 • Di Wu, Kun Zhang, Fei Cheng, Yang Zhao, Qi Liu, Chang-An Yuan, De-Shuang Huang
As a basic task in multi-camera surveillance systems, person re-identification aims to re-identify a query pedestrian observed across multiple non-overlapping cameras, or at different times by a single camera.
no code implementations • 16 Sep 2018 • Zhishuai Han, Xiaojuan Ban, Xiaokun Wang, Di Wu
Dynamic hand tracking and gesture recognition is a hard task, since the fingers have many joints and each joint has many degrees of freedom.
Human-Computer Interaction
no code implementations • 10 Sep 2018 • Di Wu, Cheng Chen, Xun Yang, Xiujun Chen, Qing Tan, Jian Xu, Kun Gai
With this formulation, we derive the optimal impression allocation strategy by solving the optimal bidding functions for contracts.
Multi-agent Reinforcement Learning
reinforcement-learning
2 code implementations • 20 Jun 2018 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
Motivation: Drug discovery demands rapid quantification of compound-protein interaction (CPI).
Ranked #2 on Drug Discovery on BindingDB IC50
no code implementations • 23 Feb 2018 • Di Wu, Xiujun Chen, Xun Yang, Hao Wang, Qing Tan, Xiaoxun Zhang, Jian Xu, Kun Gai
Our analysis shows that the immediate reward from environment is misleading under a critical resource constraint.
no code implementations • 31 Dec 2017 • Zhijian Liu, Di Wu, Hongyu Wei, Guoqing Cao
The survey indicates that the theory and application of machine learning methods in the fields of energy conservation and indoor environment are not yet mature, owing to the difficulty of determining a model structure with better predictive performance.
1 code implementation • 2 Aug 2017 • Di Wu, Wenbin Zou, Xia Li, Yong Zhao
Visual tracking is intrinsically a temporal problem.
no code implementations • Neurocomputing 2017 • Di Wu, Mingsheng Shang, Xin Luo, Ji Xu, Huyong Yan, Weihui Deng, Guoyin Wang
Having a multitude of unlabeled data but few labeled samples is a common problem in many practical applications.
no code implementations • CVPR 2014 • Di Wu, Ling Shao
Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data.