no code implementations • 23 Feb 2018 • Di Wu, Xiujun Chen, Xun Yang, Hao Wang, Qing Tan, Xiaoxun Zhang, Jian Xu, Kun Gai
Our analysis shows that the immediate reward from the environment is misleading under a critical resource constraint.
no code implementations • 31 Dec 2017 • Zhijian Liu, Di Wu, Hongyu Wei, Guoqing Cao
This indicates that the theory and application of machine learning methods in the field of energy conservation and indoor environment are not yet mature, owing to the difficulty of determining a model structure that yields good predictions.
no code implementations • 10 Sep 2018 • Di Wu, Cheng Chen, Xun Yang, Xiujun Chen, Qing Tan, Jian Xu, Kun Gai
With this formulation, we derive the optimal impression allocation strategy by solving the optimal bidding functions for contracts.
Multi-agent Reinforcement Learning • Reinforcement Learning +1
no code implementations • 26 Sep 2018 • Di Wu, Kun Zhang, Fei Cheng, Yang Zhao, Qi Liu, Chang-An Yuan, De-Shuang Huang
As a basic task of multi-camera surveillance systems, person re-identification aims to re-identify a query pedestrian observed from non-overlapping multiple cameras or at different times with a single camera.
no code implementations • 13 Dec 2018 • Di Wu, Hong-Wei Yang, De-Shuang Huang
Most of them focus on learning part-based feature representations of the human body along the horizontal direction.
no code implementations • EMNLP 2018 • Yufeng Diao, Hongfei Lin, Di Wu, Liang Yang, Kan Xu, Zhihao Yang, Jian Wang, Shaowu Zhang, Bo Xu, Dongyu Zhang
In this work, we first use WordNet to understand and expand word embeddings to settle the polysemy of homographic puns, and then propose a WordNet-Encoded Collocation-Attention network model (WECA), combined with context weights, for recognizing puns.
no code implementations • 16 Sep 2018 • Zhishuai Han, Xiaojuan Ban, Xiaokun Wang, Di Wu
Dynamic hand tracking and gesture recognition is a hard task since there are many joints on the fingers and each joint has many degrees of freedom.
Human-Computer Interaction
no code implementations • CVPR 2014 • Di Wu, Ling Shao
Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data.
no code implementations • 5 Jun 2019 • Min Chen, Ping Zhou, Di Wu, Long Hu, Mohammad Mehedi Hassan, Atif Alamri
First, the wide collection of data in the closed-loop information flow between the user and the remote medical data center is discussed.
no code implementations • 23 Nov 2019 • Di Wu, Chao Wang, Yong Wu, De-Shuang Huang
Besides, most multi-scale models embed the multi-scale feature learning block into the deep feature extraction network, which reduces the efficiency of the inference network.
no code implementations • 29 Dec 2019 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
DeepRelations shows superior interpretability to the state-of-the-art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5-, 16.9-, 19.3- and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively.
no code implementations • 6 Feb 2020 • Di Wu, Huayan Wan, Siping Liu, Weiren Yu, Zhanpeng Jin, Dakuo Wang
The "mind-controlling" capability has always been in mankind's fantasy.
no code implementations • 17 Mar 2020 • Di Wu, Yihao Chen, Xianbiao Qi, Yongjian Yu, Weixuan Chen, Rong Xiao
We utilise the overlay between the accurate mask prediction and less accurate mesh prediction to iteratively optimise the direct regressed 6D pose information with a focus on translation estimation.
no code implementations • 16 Sep 2020 • Zhi Wang, Chaoge Liu, Xiang Cui, Jie Yin, Jiaxi Liu, Di Wu, Qixu Liu
The defender can limit the attacker once it is exposed.
no code implementations • 22 Sep 2020 • Shuai Yu, Xu Chen, Zhi Zhou, Xiaowen Gong, Di Wu
Ultra-dense edge computing (UDEC) has great potential, especially in the 5G era, but it still faces challenges in its current solutions, such as the lack of: i) efficient utilization of multiple 5G resources (e.g., computation, communication, storage and service resources); ii) low-overhead offloading decision making and resource allocation strategies; and iii) privacy and security protection schemes.
no code implementations • 1 Jan 2021 • Di Wu, Liang Ding, Shuo Yang, DaCheng Tao
Recently, the performance of neural word alignment models has exceeded that of statistical models.
no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
Distributed, Parallel, and Cluster Computing
no code implementations • 11 Jan 2021 • Yao Fu, Yipeng Zhou, Di Wu, Shui Yu, Yonggang Wen, Chao Li
Then, we theoretically derive: 1) the conditions for the DP based FedAvg to converge as the number of global iterations (GI) approaches infinity; 2) the method to set the number of local iterations (LI) to minimize the negative influence of DP noises.
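The DP-based FedAvg analyzed above perturbs client updates before server-side aggregation. As an illustration only (function names are hypothetical and this is not necessarily the paper's exact noise calibration), a minimal sketch of one round is:

```python
import random

def clip(update, c):
    """Scale an update so its L2 norm is at most c (standard DP clipping)."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_fedavg_round(client_updates, clip_norm=1.0, sigma=0.1, seed=0):
    """One DP-FedAvg round: clip each client's update, add Gaussian
    noise scaled by the clipping bound, then average across clients."""
    rng = random.Random(seed)
    noised = []
    for upd in client_updates:
        clipped = clip(upd, clip_norm)
        noised.append([u + rng.gauss(0.0, sigma * clip_norm) for u in clipped])
    n = len(noised)
    return [sum(col) / n for col in zip(*noised)]
```

Setting `sigma=0` recovers plain FedAvg on clipped updates, which is the baseline the convergence conditions are compared against.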
no code implementations • IWSLT (ACL) 2022 • Di Wu, Liang Ding, Shuo Yang, Mingyang Li
Recently, the performance of neural word alignment models has exceeded that of statistical models.
no code implementations • 11 Mar 2021 • Miao Hu, Xianzhuo Luo, Jiawen Chen, Young Choon Lee, Yipeng Zhou, Di Wu
Virtual Reality (VR) has shown great potential to revolutionize the market by providing users immersive experiences with freedom of movement.
Networking and Internet Architecture
no code implementations • 29 Mar 2021 • Valentina Popescu, Abhinav Venigalla, Di Wu, Robert Schreiber
While neural networks have been trained using IEEE-754 binary32 arithmetic, the rapid growth of computational demands in deep learning has boosted interest in faster, low precision training.
no code implementations • 13 Apr 2021 • Di Wu, Yiren Chen, Liang Ding, DaCheng Tao
A spoken language understanding (SLU) system usually consists of various pipeline components, where each component heavily relies on the results of its upstream ones.
Automatic Speech Recognition (ASR) +7
no code implementations • 19 Feb 2021 • Di Wu, Yong Zeng, Shi Jin, Rui Zhang
Two instances of CKM are proposed for beam alignment in mmWave massive MIMO systems, namely channel path map (CPM) and beam index map (BIM).
no code implementations • 4 May 2020 • Ning Gao, Yong Zeng, Jian Wang, Di Wu, Chaoyue Zhang, Qingheng Song, Jiachen Qian, Shi Jin
In this paper, via extensive flight experiments, we aim to first validate the recently derived theoretical energy model for rotary-wing UAVs, and then develop a general model for those complicated flight scenarios where rigorous theoretical model derivation is quite challenging, if not impossible.
no code implementations • 22 Apr 2021 • Di Wu, XiaoFeng Xie, Xiang Ni, Bin Fu, Hanhui Deng, Haibo Zeng, Zhijin Qin
We further present an experiment on data anomaly detection in this architecture, and the comparison between two architectures for ECG diagnosis.
no code implementations • 10 Jun 2021 • Di Wu, BinBin Zhang, Chao Yang, Zhendong Peng, Wenjing Xia, Xiaoyu Chen, Xin Lei
On the AISHELL-1 experiment, we achieve a 4.63% character error rate (CER) with a non-streaming setup and 5.05% with a streaming setup with 320ms latency by U2++.
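The character error rate (CER) reported above is the character-level Levenshtein edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric itself (independent of the U2++ model):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    r, h = list(reference), list(hypothesis)
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, 1):
        curr = [i] + [0] * len(h)
        for j, hc in enumerate(h, 1):
            cost = 0 if rc == hc else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[-1] / len(r)
```

For example, one substitution in a four-character reference yields a CER of 0.25, i.e. 25%.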
no code implementations • 28 Jun 2021 • Huiliang Zhang, Sayani Seal, Di Wu, Benoit Boulet, Francois Bouffard, Geza Joos
Building energy management is one of the core problems in modern power grids to reduce energy consumption while ensuring occupants' comfort.
no code implementations • 16 Jul 2021 • Jiuqi Zhang, Di Wu, Benoit Boulet
With the rapid increase in the integration of renewable energy generation and the wide adoption of various electric appliances, power grids are now faced with more and more challenges.
no code implementations • 24 Jul 2021 • Liang Ding, Di Wu, DaCheng Tao
Our constrained system is based on a pipeline framework, i.e., ASR and NMT.
no code implementations • EMNLP 2021 • Liang Ding, Di Wu, DaCheng Tao
We present a simple and effective pretraining strategy -- bidirectional training (BiT) for neural machine translation.
no code implementations • 29 Sep 2021 • Siyuan Li, Zicheng Liu, Di Wu, Stan Z. Li
In this paper, we decompose mixup into two sub-tasks of mixup generation and classification and formulate it for discriminative representations as class- and instance-level mixup.
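The mixup-generation sub-task mentioned above is, in its standard form, a convex combination of two samples and their labels with a Beta-distributed mixing coefficient. A minimal sketch (list-based tensors and the function name are illustrative, not the paper's implementation):

```python
import random

def mixup(x1, x2, y1, y2, alpha=1.0, rng=None):
    """Mixup generation: convex combination of two samples and their
    one-hot labels with lambda ~ Beta(alpha, alpha)."""
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

The classification sub-task then trains a network on the mixed pairs `(x, y)`; the paper's contribution is in how the two sub-tasks are formulated jointly.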
no code implementations • 29 Sep 2021 • Yuwei Fu, Di Wu, Benoit Boulet
Through extensive experiments on the standard batch RL datasets, we find that non-uniform sampling is also effective in batch RL settings.
no code implementations • 29 Sep 2021 • Di Wu, Tianyu Li, David Meger, Michael Jenkin, Xue Liu, Gregory Dudek
Unfortunately, most online reinforcement learning algorithms require a large number of interactions with the environment to learn a reliable control policy.
no code implementations • 26 Oct 2021 • Di Wu, Yi Shi, Ziyu Wang, Jie Yang, Mohamad Sawan
Although compressive sensing (CS) can be adopted to compress the signals to reduce communication bandwidth requirement, it needs a complex reconstruction procedure before the signal can be used for seizure prediction.
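Compressive sensing as described compresses a length-n signal into m < n random linear measurements; the encoder is cheap, while it is the reconstruction step that is complex, which is the cost the work seeks to avoid. A hedged sketch of the measurement side only (pure-Python stand-in for a matrix multiply):

```python
import random

def compress(signal, m, seed=0):
    """Compressive sensing encoder: y = Phi @ x, where Phi is a random
    Gaussian measurement matrix of shape (m, n) with m < n.
    Only encoding is shown; reconstruction is the costly step."""
    n = len(signal)
    rng = random.Random(seed)
    phi = [[rng.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)]
           for _ in range(m)]
    return [sum(p * x for p, x in zip(row, signal)) for row in phi]
```

The encoder side fits on a low-power sensing device precisely because it is a single matrix-vector product; recovery (e.g., by sparse optimization) is what demands compute.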
no code implementations • 15 Nov 2021 • Xingshuai Huang, Di Wu, Michael Jenkin, Benoit Boulet
Traffic signal control is of critical importance for the effective use of transportation infrastructures.
no code implementations • 31 Dec 2021 • Xuehui Yu, Di Wu, Qixiang Ye, Jianbin Jiao, Zhenjun Han
As a result, we propose a point self-refinement approach that iteratively updates point annotations in a self-paced way.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Ce Yang, Weihao Gao, Di Wu, Chong Wang
Simulation of the dynamics of physical systems is essential to the development of both science and engineering.
no code implementations • 25 Feb 2022 • Di Wu, Jie Yang, Mohamad Sawan
The proposed training scheme significantly improves the performance of patient-specific seizure predictors and bridges the gap between patient-specific and patient-independent predictors.
no code implementations • International Conference on Security and Privacy in Digital Economy 2020 • Ying Zhao, Junjun Chen, Qianling Guo, Jian Teng, Di Wu
In the second learning stage, Ot uses the transfer learning method to reconstruct and re-train the model to further improve the detection performance on the specific task.
no code implementations • 11 Mar 2022 • Di Wu, Cheng Chen, Xiujun Chen, Junwei Pan, Xun Yang, Qing Tan, Jian Xu, Kuang-Chih Lee
In order to address the unstable traffic pattern challenge and achieve the optimal overall outcome, we propose a multi-agent reinforcement learning method to adjust the bids from each guaranteed contract, which is simple, converges efficiently, and scales well.
no code implementations • Neurocomputing 2017 • Di Wu, Mingsheng Shang, Xin Luo, Ji Xu, Huyong Yan, Weihui Deng, Guoyin Wang
Having a multitude of unlabeled data and few labeled ones is a common problem in many practical applications.
no code implementations • 25 Mar 2022 • Tao Fu, Huifen Zhou, Xu Ma, Z. Jason Hou, Di Wu
In this study, we develop a supervised machine learning approach to generate 1) the probability of the next operation day containing the peak hour of the month and 2) the probability of an hour to be the peak hour of the day.
no code implementations • 2 Apr 2022 • Jia Chen, Di Wu, Xin Luo
High-dimensional and sparse (HiDS) matrices are frequently adopted to describe the complex relationships in various big data-related systems and applications.
no code implementations • 16 Apr 2022 • Di Wu, Yi He, Xin Luo
A High-dimensional and sparse (HiDS) matrix is frequently encountered in a big data-related application like an e-commerce system or a social network services system.
no code implementations • 16 Apr 2022 • Di Wu, Peng Zhang, Yi He, Xin Luo
High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big data-related applications.
no code implementations • 27 Apr 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
To address this problem, many transfer learning based EFD methods utilize historical data to learn transferable domain knowledge and conduct early fault detection on new target bearings.
no code implementations • 1 May 2022 • Wenbin Song, Di Wu, Weiming Shen, Benoit Boulet
One of the key points of EFD is developing a generic model to extract robust and discriminative features from different equipment for early fault detection.
no code implementations • 24 May 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
no code implementations • 26 May 2022 • Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
no code implementations • 6 Jun 2022 • Jiajia Zhou, Junbin Zhuang, Yan Zheng, Di Wu
As this network turns "Haar Images into Fusion Images", it is called HIFI-Net.
no code implementations • 14 Jul 2022 • Ningkun Zheng, Xin Qin, Di Wu, Gabe Murtaugh, Bolun Xu
Combined with an optimal bidding design algorithm using dynamic programming, our paper shows that the SoC segment market model provides more accurate representations of the opportunity costs of energy storage compared to existing power-based bidding models.
no code implementations • 2 Aug 2022 • Feilong Chen, Di Wu, Jie Yang, Yi He
In many real applications such as intelligent healthcare platforms, streaming features often contain missing data, which raises a crucial challenge in conducting OSFS, i.e., how to establish the uncertain relationship between sparse streaming features and labels.
no code implementations • 2 Aug 2022 • Xin Cheng, Feng Shu, YiFan Li, Zhihong Zhuang, Di Wu, Jiangzhou Wang
In this paper, optimal geometrical configurations of UAVs in received signal strength (RSS)-based localization under region constraints are investigated.
no code implementations • 13 Aug 2022 • Yuanyi Liu, Jia Chen, Di Wu
The A2BAS algorithm consists of two sub-algorithms.
no code implementations • 16 Aug 2022 • Yuting Ding, Di Wu
In the past decade, much research has been done on the recovery of missing traffic data; however, how to make full use of spatio-temporal traffic patterns to improve the recovery performance is still an open problem.
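As a point of reference for the recovery problem above, the simplest purely temporal baseline fills gaps by linear interpolation between the nearest observed values; spatio-temporal methods additionally exploit readings from nearby sensors. A minimal sketch (illustrative only; assumes at least one observed value in the series):

```python
def interpolate_missing(series):
    """Fill None gaps in a time series by linear interpolation between
    the nearest observed neighbours; leading/trailing gaps are filled
    flat from the nearest observation. Purely temporal baseline."""
    filled = list(series)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            j = i
            while j < n and filled[j] is None:
                j += 1  # find the next observed value
            left = filled[i - 1] if i > 0 else filled[j]
            right = filled[j] if j < n else filled[i - 1]
            gap = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / gap
                filled[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return filled
```

A spatio-temporal method would replace the 1-D neighbour average with information from correlated road segments, which is exactly the pattern exploitation the paper targets.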
no code implementations • AAAI Conference on Artificial Intelligence 2022 • Yuwei Fu, Di Wu, Benoit Boulet
To deal with this challenge, we propose a reinforcement learning (RL) based model combination (RLMC) framework for determining model weights in an ensemble for time series forecasting tasks.
no code implementations • 23 Aug 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di Wu, Qian Li
Federated learning (FL) facilitates multiple clients to jointly train a machine learning model without sharing their private data.
no code implementations • 19 Sep 2022 • Hyeonjin Kim, Kai Ye, Han Pyo Lee, Rongxing Hu, Ning Lu, Di Wu, PJ Rehm
The residual load profiles are processed using ICA for HVAC load extraction.
no code implementations • 12 Oct 2022 • Wenjian Hao, Bowen Huang, Wei Pan, Di Wu, Shaoshuai Mou
This paper presents a data-driven approach to approximate the dynamics of a nonlinear time-varying system (NTVS) by a linear time-varying system (LTVS), which results from the Koopman operator and deep neural networks.
no code implementations • 3 Oct 2022 • Di Wu, Jie Yang, Mohamad Sawan
In this survey, we assess the eligibility of more than fifty published peer-reviewed representative transfer learning approaches for EMG applications.
no code implementations • 23 Oct 2022 • Huiliang Zhang, Di Wu, Benoit Boulet
The building sector has been recognized as one of the primary sectors for worldwide energy consumption.
no code implementations • 31 Oct 2022 • Xingchen Song, Di Wu, BinBin Zhang, Zhiyong Wu, Wenpeng Li, Dongfang Li, Pengshen Zhang, Zhendong Peng, Fuping Pan, Changbao Zhu, Zhongqin Wu
Therefore, we name it FusionFormer.
Automatic Speech Recognition (ASR) +1
no code implementations • 7 Nov 2022 • Han Pyo Lee, Yiyan Li, Lidong Song, Di Wu, Ning Lu
In contrast to many existing methods, we treat CVR baseline estimation as a missing data retrieval problem.
no code implementations • 16 Nov 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di Wu
To our knowledge, we are the first to present an FSSL method that utilizes only 10% labeled clients, while still achieving superior performance compared to standard federated supervised learning, which uses all clients with labeled data.
no code implementations • 29 Nov 2022 • Yiyan Li, Lidong Song, Yi Hu, Hanpyo Lee, Di Wu, PJ Rehm, Ning Lu
We propose a Generator structure consisting of a coarse network and a fine-tuning network.
no code implementations • 9 Dec 2022 • Kai Ye, Hyeonjin Kim, Yi Hu, Ning Lu, Di Wu, PJ Rehm
This paper presents a modified sequence-to-point (S2P) algorithm for disaggregating the heating, ventilation, and air conditioning (HVAC) load from the total building electricity consumption.
no code implementations • CVPR 2023 • Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu
UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
no code implementations • 16 Dec 2022 • Rongxing Hu, Kai Ye, Hyeonjin Kim, Hanpyo Lee, Ning Lu, Di Wu, PJ Rehm
This paper presents a coordinative demand charge mitigation (DCM) strategy for reducing electricity consumption during system peak periods.
no code implementations • 20 Dec 2022 • Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, DaCheng Tao
Machine Translation Quality Estimation (QE) is the task of evaluating translation output in the absence of human-written references.
no code implementations • 20 Dec 2022 • Cheng Liang, Teng Huang, Yi He, Song Deng, Di Wu, Xin Luo
The idea of the proposed MMA is mainly two-fold: 1) apply different $L_p$-norm on loss function and regularization to form different variant models in different metric spaces, and 2) aggregate these variant models.
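To make the two-fold idea concrete: under the squared (L2) loss the best constant predictor of a dataset is its mean, while under the absolute (L1) loss it is the median, so fitting under different $L_p$-norms really does produce different variant models, and averaging their outputs is an MMA-style aggregation in miniature. A toy sketch only, not the paper's matrix-factorization models:

```python
def fit_center(data, p):
    """Minimise sum |x - c|^p over a constant c: p=2 gives the mean,
    p=1 the median -- two 'variant models' in different metric spaces."""
    if p == 2:
        return sum(data) / len(data)
    if p == 1:
        s = sorted(data)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    raise ValueError("only p in {1, 2} in this sketch")

def aggregate(data, ps=(1, 2)):
    """MMA-style step 2: aggregate the variant models' predictions."""
    return sum(fit_center(data, p) for p in ps) / len(ps)
```

On data with an outlier, the L1 variant is robust while the L2 variant tracks the bulk magnitude; the aggregate sits between them, which is the robustness/accuracy trade-off the aggregation is meant to buy.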
no code implementations • 12 Jan 2023 • Amir Farakhor, Di Wu, Yebin Wang, Huazhen Fang
An optimal power management approach is developed to extensively exploit the merits of the proposed design.
no code implementations • 3 Feb 2023 • Igor Kozlov, Dmitriy Rivkin, Wei-Di Chang, Di Wu, Xue Liu, Gregory Dudek
Such networks undergo frequent and often heterogeneous changes caused by network operators, who are seeking to tune their system parameters for optimal performance.
no code implementations • 7 Feb 2023 • Huiliang Zhang, Di Wu, Benoit Boulet
Safety has been recognized as the central obstacle preventing the use of reinforcement learning (RL) in real-world applications.
no code implementations • 25 Feb 2023 • Ruiyang Xu, Di Wu, Xin Luo
Traditional feature selection methods need to know the feature space before learning, so online streaming feature selection (OSFS) is proposed to process streaming features on the fly.
no code implementations • 11 Mar 2023 • Xijuan Sun, Di Wu, Arnaud Zinflou, Benoit Boulet
Usually, machine learning-based methods need to model the normal data distribution.
no code implementations • 14 Mar 2023 • Jikun Kang, Di Wu, Ju Wang, Ekram Hossain, Xue Liu, Gregory Dudek
In cellular networks, User Equipment (UE) hands off from one Base Station (BS) to another, giving rise to the load balancing problem among the BSs.
no code implementations • 22 Mar 2023 • Borui Cai, Yong Xiang, Longxiang Gao, Di Wu, He Zhang, Jiong Jin, Tom Luan
To seek a simple strategy to improve the parameter efficiency of conventional KGE models, we take inspiration from the fact that deeper neural networks require exponentially fewer parameters to achieve expressiveness comparable to wider networks for compositional structures.
no code implementations • 25 Mar 2023 • Miao Hu, Zhenxiao Luo, Amirmohammad Pasdar, Young Choon Lee, Yipeng Zhou, Di Wu
Edge computing has been gaining momentum with ever-increasing data at the edge of the network.
no code implementations • 22 Mar 2023 • Yi Tian Xu, Jimmy Li, Di Wu, Michael Jenkin, Seowoo Jang, Xue Liu, Gregory Dudek
When deploying to an unknown traffic scenario, we select a policy from the policy bank based on the similarity between the previous-day traffic of the current scenario and the traffic observed during training.
no code implementations • 22 Mar 2023 • Abhisek Konar, Di Wu, Yi Tian Xu, Seowoo Jang, Steve Liu, Gregory Dudek
Engineering this reward function is challenging because it requires expert knowledge and there is no general consensus on the form of an optimal reward function.
no code implementations • 9 May 2023 • Yunchao Yang, Yipeng Zhou, Miao Hu, Di Wu, Quan Z. Sheng
The challenge of this problem lies in the opaque feedback between reward budget allocation and model utility improvement of FL, making the optimal reward budget allocation complicated.
no code implementations • CVPR 2023 • Xinglin Li, Jiajing Chen, Jinhui Ouyang, Hanhui Deng, Senem Velipasalar, Di Wu
Recent years have witnessed significant developments in point cloud processing, including classification and segmentation.
no code implementations • 18 May 2023 • Xingchen Song, Di Wu, BinBin Zhang, Zhendong Peng, Bo Dang, Fuping Pan, Zhiyong Wu
In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective \textbf{training-free} methods to decrease the Token Display Time (TDT) of streaming ASR models \textbf{without any accuracy loss}.
no code implementations • 22 May 2023 • Chu-Kuan Jiang, Yang-Fan Deng, Hongxiao Guo, Guang-Hao Chen, Di Wu
Typical pretreated wastewater was synthesized with chemical oxygen demand of 110 mg/L, sulfate of 50 mg S/L, and varying dissolved oxygen (DO) and was fed into a moving-bed biofilm reactor (MBBR).
no code implementations • 31 May 2023 • Yan Wang, Feng Shu, Zhihong Zhuang, Rongen Dong, Qi Zhang, Di Wu, Liang Yang, Jiangzhou Wang
Numerical simulation results show that a 3-bit discrete phase shifter is required to achieve a trivial performance loss for a large-scale active IRS.
no code implementations • 19 Jun 2023 • Liping Zhang, Di Wu, Xin Luo
Then, based on the idea of stacking ensemble, long short-term memory is employed as an error correction module to forecast the components separately, and the forecast results are treated as new features to be fed into extreme gradient boosting for the second-step forecasting.
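The two-step stacking scheme above can be illustrated with stand-in models: a base forecaster plus a second-stage correction trained on the base model's historical residuals. In this sketch, persistence forecasting and a mean-residual correction are hypothetical stand-ins for the paper's LSTM error-correction module and XGBoost second-step forecaster:

```python
def persistence_forecast(history):
    """Stage-1 stand-in model: predict the last observed value."""
    return history[-1]

def stacked_forecast(series):
    """Two-step stacking: run the base forecaster, fit a correction
    model on its historical residuals, and add the correction to the
    base forecast (final = base + learned error)."""
    residuals = [series[t] - persistence_forecast(series[:t])
                 for t in range(1, len(series))]
    correction = sum(residuals) / len(residuals)  # stage-2 stand-in
    return persistence_forecast(series) + correction
```

On a series with a steady upward trend, the persistence baseline always lags by one step, and the residual model learns exactly that bias, illustrating why stacking an error-correction stage on top of a decomposition-based forecaster can help.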
no code implementations • 18 Jul 2023 • Xingyue Ma, Hongying Chen, Ri He, Zhanbo Yu, Sergei Prokhorenko, Zheng Wen, Zhicheng Zhong, Jorge Iñiguez, L. Bellaiche, Di Wu, Yurong Yang
However, the parametrization of the effective Hamiltonian is complicated and can hardly resolve systems with complex interactions and/or complex components.
no code implementations • 2 Aug 2023 • Yuzhu Li, Nir Pillar, Jingxi Li, Tairan Liu, Di Wu, Songyu Sun, Guangdong Ma, Kevin De Haan, Luzhe Huang, Sepehr Hamidi, Anatoly Urisman, Tal Keidar Haran, William Dean Wallace, Jonathan E. Zuckerman, Aydogan Ozcan
Histological examination is a crucial step in an autopsy; however, the traditional histochemical staining of post-mortem samples faces multiple challenges, including the inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, as well as the resource-intensive nature of chemical staining procedures covering large tissue areas, which demand substantial labor, cost, and time.
no code implementations • 18 Sep 2023 • Renhe Chen, Albert Lee, ZiRui Wang, Di Wu, Xufeng Kou
This brief introduces a read bias circuit to improve readout yield of magnetic random access memories (MRAMs).
no code implementations • 15 Oct 2023 • Di Wu, Shaomu Tan, David Stap, Ali Araabi, Christof Monz
This paper describes the UvA-MT's submission to the WMT 2023 shared task on general machine translation.
no code implementations • 5 Oct 2023 • Junliang Luo, Yi Tian Xu, Di Wu, Michael Jenkin, Xue Liu, Gregory Dudek
In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch on/off the cells of base stations to reduce network power consumption while maintaining adequate Quality of Service (QoS) metrics.
no code implementations • 19 Oct 2023 • Min Gyung Yu, Xu Ma, Bowen Huang, Karthik Devaprasad, Fredericka Brown, Di Wu
The solution is determined considering both capital costs in optimal sizing and operational benefits in optimal dispatch.
no code implementations • 20 Oct 2023 • Xabi Azagirre, Akshay Balwally, Guillaume Candeli, Nicholas Chamandy, Benjamin Han, Alona King, Hyungjun Lee, Martin Loncaric, Sebastien Martin, Vijay Narasiman, Zhiwei Qin, Baptiste Richard, Sara Smoot, Sean Taylor, Garrett van Ryzin, Di Wu, Fei Yu, Alex Zamoshchin
This change was the first documented implementation of a ridesharing matching algorithm that can learn and improve in real time.
no code implementations • 24 Oct 2023 • Yifan Tang, M. Rahmani Dehaghani, Pouyan Sajadi, Shahriar Bakrani Balani, Akshay Dhalpe, Suraj Panicker, Di Wu, Eric Coatanea, G. Gary Wang
With measured/predicted temperature profiles of several points on the same layer, the second stage proposes a reduced order model (ROM) (intra-layer prediction model) to decompose and construct the temperature profiles of all points on the same layer, which could be used to build the temperature field of the entire layer.
no code implementations • 25 Oct 2023 • Amir Farakhor, Di Wu, Yebin Wang, Huazhen Fang
Since the number of clusters is much fewer than the number of cells, the proposed approach significantly reduces the computational costs, allowing optimal power management to scale up to large-scale BESS.
no code implementations • 27 Nov 2023 • Xinglin Li, Kun Wang, Hanhui Deng, Yuxuan Liang, Di Wu
We propose the novel concept of a Shock Absorber (a type of perturbation) that enhances the robustness and stability of the original graphs against changes in an adversarial training fashion.
no code implementations • 6 Dec 2023 • Jimmy Li, Igor Kozlov, Di Wu, Xue Liu, Gregory Dudek
This coincides with a rapid increase in the number of cell sites worldwide, driven largely by dramatic growth in cellular network traffic.
no code implementations • 12 Dec 2023 • Xingshuai Huang, Di Wu, Benoit Boulet
In this work, we propose DTLight, a simple yet powerful lightweight Decision Transformer-based TSC method that can learn policy from easily accessible offline datasets.
no code implementations • 19 Dec 2023 • Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu
In this paper, we establish a non-asymptotic estimation error of pessimistic offline RL using general neural network approximation with $\mathcal{C}$-mixing data regarding the structure of networks, the dimension of datasets, and the concentrability of data coverage, under mild assumptions.
no code implementations • 25 Dec 2023 • Xicong Shen, Yang Liu, Huiqi Liu, Jue Hong, Bing Duan, Zirui Huang, Yunlong Mao, Ye Wu, Di Wu
Fine-tuning is a prominent technique to adapt a pre-trained language model to downstream scenarios.
no code implementations • 5 Jan 2024 • Osten Anderson, Nanpeng Yu, Konstantinos Oikonomou, Di Wu
To this end, we propose a novel method for selecting representative periods of any length.
no code implementations • 16 Jan 2024 • Junliang Luo, Tianyu Li, Di Wu, Michael Jenkin, Steve Liu, Gregory Dudek
Large language models (LLMs), including ChatGPT, Bard, and Llama, have achieved remarkable successes over the last two years in a range of different applications.
no code implementations • International Conference on Communication, Image and Signal Processing (CCISP) 2023 • Di Wu, Zhihui Xin, Chao Zhang
Experiments show that, compared with traditional gradient algorithms such as BI, Cok, Hibbard, Laroche, and Hamilton, the proposed algorithm achieves better recovery at image edges and in texture-complex regions, with higher PSNR and SSIM values and better subjective visual quality. Moreover, the algorithm involves only add-subtract and shift operations, making it suitable for implementation on a hardware platform.
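The add-subtract-and-shift claim refers to hardware-friendly arithmetic: for example, the average of two neighbouring pixels is an integer add followed by a right shift, with no multiplier or divider. A hedged sketch of one bilinear-style interpolation step on a single Bayer row (illustrative only, not the paper's full demosaicing algorithm):

```python
def avg2(a, b):
    """Hardware-friendly average of two pixel values: add, then shift."""
    return (a + b) >> 1

def interpolate_green_row(row):
    """Fill the missing even-index samples of a Bayer row by averaging
    their left/right neighbours using only add and shift; edge pixels
    copy their single available neighbour."""
    out = list(row)
    for i in range(0, len(row), 2):
        left = row[i - 1] if i > 0 else row[i + 1]
        right = row[i + 1] if i + 1 < len(row) else row[i - 1]
        out[i] = avg2(left, right)
    return out
```

Because every operation maps to an adder and a wire shift, this kind of kernel synthesizes directly onto an FPGA/ASIC datapath, which is the hardware suitability the abstract highlights.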
1 code implementation • 22 Jan 2024 • Di Wu, Shaomu Tan, Yan Meng, David Stap, Christof Monz
Zero-shot translation aims to translate between language pairs not seen during training in Multilingual Machine Translation (MMT) and is largely considered an open problem.
no code implementations • 26 Jan 2024 • Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, Lili Bo, Jiale Zhang, Bin Li, Wei Liu, Di Wu, Yixin Chen
The remarkable achievements of Artificial Intelligence (AI) algorithms, particularly in Machine Learning (ML) and Deep Learning (DL), have fueled their extensive deployment across multiple sectors, including Software Engineering (SE).
no code implementations • 30 Jan 2024 • Panagiotis Pagonis, Kai Hartung, Di Wu, Munir Georges, Sören Gröttrup
Knowledge Tracing (KT) aims to predict the future performance of students by tracking the development of their knowledge states.
no code implementations • 31 Jan 2024 • Zikai Feng, Di Wu, Mengxing Huang, Chau Yuen
In this paper, a novel graph-attention multi-agent trust region (GA-MATR) reinforcement learning framework is proposed to solve the multi-UAV assisted communication problem.
no code implementations • 28 Feb 2024 • Juan Zhang, Jiahao Chen, Cheng Wang, Zhiwang Yu, Tangquan Qi, Di Wu
Despite numerous completed studies, achieving high fidelity talking face generation with highly synchronized lip movements corresponding to arbitrary audio remains a significant challenge in the field.
no code implementations • 28 Feb 2024 • Yibin Lei, Di Wu, Tianyi Zhou, Tao Shen, Yu Cao, Chongyang Tao, Andrew Yates
In this work, we introduce a new unsupervised embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning or task-specific engineering.
no code implementations • 4 Mar 2024 • Chao Zhang, Shiwei Wu, Haoxin Zhang, Tong Xu, Yan Gao, Yao Hu, Di Wu, Enhong Chen
Indeed, learning to generate hashtags/categories can potentially enhance note embeddings, both of which compress key note information into limited content.
no code implementations • 7 Mar 2024 • Jialin Chen, Zhiqiang Cai, Ke Xu, Di Wu, Wei Cao
Considering the noise level limit, one crucial aspect for quantum machine learning is to design a high-performing variational quantum circuit architecture with a small number of quantum gates.
no code implementations • 13 Mar 2024 • Zhuoyin Dai, Di Wu, Zhenjun Dong, Kun Li, Dingyang Ding, Sihan Wang, Yong Zeng
In this paper, to alleviate the large training overhead in millimeter wave (mmWave) beam alignment, an environment-aware and training-free beam alignment prototype is established based on a typical CKM, termed beam index map (BIM).
no code implementations • 15 Mar 2024 • Di Wu, Wasi Uddin Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma
Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion.
no code implementations • 14 Mar 2024 • Jinhui Ouyang, Mingzhu Wu, Xinglin Li, Hanhui Deng, Di wu
To better extract the joint features of heterogeneous EEG data as well as enhance classification accuracy, BRIEDGE introduces an informer-based ProbSparse self-attention mechanism.
no code implementations • 26 Mar 2024 • Youpeng Zhao, Di wu, Jun Wang
In a single GPU-CPU system, we demonstrate that under varying workloads, ALISA improves the throughput of baseline systems such as FlexGen and vLLM by up to 3X and 1.9X, respectively.
no code implementations • 17 Apr 2024 • Shaomu Tan, Di wu, Christof Monz
Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference.
no code implementations • 18 Apr 2024 • Jilan Samiuddin, Benoit Boulet, Di wu
Among these modules, the trajectory planner plays a pivotal role in the safety of the vehicle and the comfort of its passengers.
no code implementations • 22 Apr 2024 • Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di wu, Miao Hu, Rong-Hua Li
Subgraph federated learning (subgraph-FL) is a new distributed paradigm that facilitates the collaborative training of graph neural networks (GNNs) by multi-client subgraphs.
no code implementations • 25 Apr 2024 • Xingchen Song, Di wu, BinBin Zhang, Dinghao Zhou, Zhendong Peng, Bo Dang, Fuping Pan, Chao Yang
Scale has opened new frontiers in natural language processing, but at a high cost.
Automatic Speech Recognition (ASR) +1
no code implementations • 27 Apr 2024 • Di wu, Shicai Fan, Xue Zhou, Li Yu, Yuzhong Deng, Jianxiao Zou, Baihong Lin
In MDPS, the problem of normal image reconstruction is mathematically modeled as multiple diffusion posterior sampling for normal images, based on the devised masked noisy observation model and the diffusion-based normal image prior under a Bayesian framework.
1 code implementation • COLING 2020 • Liang Ding, Longyue Wang, Di wu, DaCheng Tao, Zhaopeng Tu
Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting the entire target sequence.
1 code implementation • 7 Mar 2021 • Linghan Meng, Yanhui Li, Lin Chen, Zhi Wang, Di wu, Yuming Zhou, Baowen Xu
To tackle this problem, we propose Sample Discrimination based Selection (SDS) to select efficient samples that could discriminate multiple models, i.e., the prediction behaviors (right/wrong) of these samples would be helpful to indicate the trend of model performance.
1 code implementation • 23 May 2023 • Di wu, Christof Monz
Using a vocabulary that is shared across languages is common practice in Multilingual Neural Machine Translation (MNMT).
1 code implementation • 13 Mar 2024 • Jianan Jiang, Xinglin Li, Weiren Yu, Di wu
Our method excels in preserving the distinctive style and intricate details essential for fashion design applications.
1 code implementation • 12 Oct 2023 • Jinye Yang, Ji Xu, Di wu, Jianhang Tang, Shaobo Li, Guoyin Wang
The deviation of a classification model is caused by both class-wise and attribute-wise imbalance.
1 code implementation • 5 Jul 2021 • Yipeng Zhou, Xuezheng Liu, Yao Fu, Di wu, Chao Li, Shui Yu
In this work, we study a crucial question that has been largely overlooked by existing works: what are the optimal numbers of queries and replies in FL with DP such that the final model accuracy is maximized?
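The accuracy-vs-privacy tension the question refers to enters through per-update sanitization. A minimal sketch of the standard Gaussian-mechanism step in FL with DP (the paper's query/reply optimization sits on top of steps like this; the function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client update to a bounded L2 norm, then add Gaussian noise.

    More queries/replies mean more sanitized updates are composed, which
    consumes more privacy budget -- hence an optimal number exists.
    """
    rng = rng or np.random.default_rng(0)
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```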
1 code implementation • 2 Aug 2017 • Di Wu, Wenbin Zou, Xia Li, Yong Zhao
Visual tracking is intrinsically a temporal problem.
1 code implementation • 27 Apr 2021 • Zelin Zang, Siyuan Li, Di wu, Jianzhu Guo, Yongjie Xu, Stan Z. Li
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space.
Ranked #2 on Node Clustering on Pubmed
1 code implementation • 8 Jan 2022 • Arec Jamgochian, Di wu, Kunal Menda, Soyeon Jung, Mykel J. Kochenderfer
In this paper, we introduce the conditional approximate normalizing flow (CANF) to make probabilistic multi-step time-series forecasts when correlations are present over long time horizons.
1 code implementation • 10 May 2023 • Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di wu
In this paper, we propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on collected models from clients.
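The core server-side step is a weighted average of client models under per-client weights. A minimal sketch of that averaging step only (FedDWA's actual rule for computing the weights from the collected models is not reproduced here; the function name is hypothetical):

```python
import numpy as np

def personalized_aggregate(client_models, weights):
    """Aggregate flattened client model parameters with per-client weights.

    `weights` stands in for one row of a personalized weight matrix that
    a parameter server might derive from model similarity.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    stacked = np.stack([np.asarray(m, dtype=float) for m in client_models])
    return (weights[:, None] * stacked).sum(axis=0)
```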
1 code implementation • 27 Mar 2023 • Di wu, Da Yin, Kai-Wei Chang
Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation mainly relies on exact matching with human references.
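The exact-matching evaluation the abstract critiques can be stated in a few lines. A sketch of a typical exact-match F1 between predicted and reference keyphrase sets (the lowercasing/stripping normalization here is one common choice, not necessarily the paper's):

```python
def exact_match_f1(predicted, references):
    """Set-level F1 where a prediction counts only if it exactly
    matches a human reference after simple normalization."""
    pred = {p.lower().strip() for p in predicted}
    refs = {r.lower().strip() for r in references}
    if not pred or not refs:
        return 0.0
    tp = len(pred & refs)  # exact overlaps only; paraphrases score zero
    precision = tp / len(pred)
    recall = tp / len(refs)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because paraphrases score zero under this metric, semantic-matching alternatives are the motivation for the work above.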
1 code implementation • 5 Dec 2021 • Xingtai Gui, Di wu, Yang Chang, Shicai Fan
Anomaly detection aims to separate anomalies from normal samples, and pretrained networks are promising for anomaly detection.
Ranked #8 on Anomaly Detection on One-class CIFAR-10
2 code implementations • Findings (ACL) 2022 • Sen yang, Leyang Cui, Ruoxi Ning, Di wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
1 code implementation • 15 Mar 2022 • Di wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang
State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data.
1 code implementation • 23 Jun 2023 • Amal Feriani, Di wu, Steve Liu, Greg Dudek
This work offers a comprehensive and unified framework to help researchers evaluate and design data-driven channel estimation algorithms.
1 code implementation • 1 Nov 2023 • Po-Nien Kung, Fan Yin, Di wu, Kai-Wei Chang, Nanyun Peng
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
1 code implementation • 20 Apr 2022 • Di wu, Siyuan Li, Jie Yang, Mohamad Sawan
Extensive data labeling on neurophysiological signals is often prohibitively expensive or impractical, as it may require particular infrastructure or domain expertise.
1 code implementation • 2 Nov 2021 • Rehmat Ullah, Di wu, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
Our empirical results on the CIFAR10 dataset, with both balanced and imbalanced data distributions, support our claim that FedFly can reduce training time by up to 33% when a device moves after 50% of training is completed, and by up to 45% when 90% of training is completed, compared to the state-of-the-art offloading approach in FL.
1 code implementation • 19 May 2022 • Jiuqi Elise Zhang, Di wu, Benoit Boulet
Time series anomaly detection has been recognized as of critical importance for the reliable and efficient operation of real-world systems.
1 code implementation • 20 Dec 2022 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
However, there is no systematic study of how the two types of approaches compare and how different design choices affect the performance of PLM-based models.
1 code implementation • 10 Oct 2023 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
DeSel improves greedy search by an average of 4.7% semantic F1 across five datasets.
1 code implementation • 21 Feb 2024 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
This study addresses the application of encoder-only Pre-trained Language Models (PLMs) in keyphrase generation (KPG) amidst the broader availability of domain-tailored encoder-only models compared to encoder-decoder models.
1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti
We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.
1 code implementation • EMNLP 2020 • Di wu, Liang Ding, Fan Lu, Jian Xie
Slot filling and intent detection are two main tasks in spoken language understanding (SLU) system.
1 code implementation • 9 Jul 2021 • Di wu, Rehmat Ullah, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese
Further, FedAdapt adopts reinforcement learning based optimization and clustering to adaptively identify which layers of the DNN should be offloaded for each individual device on to a server to tackle the challenges of computational heterogeneity and changing network bandwidth.
1 code implementation • The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019 2019 • Di Wu, Zhaoyong Zhuang, Canqun Xiang, Wenbin Zou and Xia Li
We present a conceptually simple framework for 6DoF object pose estimation, especially for autonomous driving scenarios.
2 code implementations • 17 Oct 2020 • Boyuan Ma, Xiang Yin, Di wu, Xiaojuan Ban
In this work, to meet the requirements of both output image quality and structural simplicity, we propose a cascade network that simultaneously generates a decision map and the fused result with an end-to-end training procedure.
2 code implementations • 20 Jun 2018 • Mostafa Karimi, Di wu, Zhangyang Wang, Yang shen
Motivation: Drug discovery demands rapid quantification of compound-protein interaction (CPI).
Ranked #2 on Drug Discovery on BindingDB IC50
1 code implementation • 31 Dec 2023 • Siyuan Li, Luyuan Zhang, Zedong Wang, Di wu, Lirong Wu, Zicheng Liu, Jun Xia, Cheng Tan, Yang Liu, Baigui Sun, Stan Z. Li
As the deep learning revolution marches on, self-supervised learning has garnered increasing attention in recent years thanks to its remarkable representation learning ability and the low dependence on labeled data.
1 code implementation • 30 Jun 2020 • Di Wu, Qi Tang, Yongle Zhao, Ming Zhang, Ying Fu, Debing Zhang
8-bit quantization has been widely applied to accelerate network inference in various deep learning applications.
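As background, a minimal sketch of symmetric per-tensor int8 quantization, the simplest of the schemes such work builds on (real toolchains also use zero-points and per-channel scales; this is illustrative, not the paper's method):

```python
import numpy as np

def quantize_int8(x):
    """Map a float tensor to int8 with a single symmetric scale."""
    scale = np.abs(x).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values."""
    return q.astype(np.float32) * scale
```

Inference then runs the matrix multiplies on the int8 values and rescales the accumulators, which is where the speedup comes from.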
1 code implementation • 7 Oct 2021 • BinBin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, Di wu, Zhendong Peng
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, for 22,400+ hours in total.
Ranked #5 on Speech Recognition on WenetSpeech
2 code implementations • 24 Mar 2021 • Zicheng Liu, Siyuan Li, Di wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li
Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.
Ranked #8 on Image Classification on Places205
1 code implementation • 30 Jun 2021 • Di wu, Siyuan Li, Zelin Zang, Stan Z. Li
Self-supervised contrastive learning has demonstrated great potential in learning visual representations.
Ranked #22 on Fine-Grained Image Classification on NABirds
1 code implementation • 27 Oct 2021 • Siyuan Li, Zicheng Liu, Zelin Zang, Di wu, ZhiYuan Chen, Stan Z. Li
For example, dimension reduction methods such as t-SNE and UMAP optimize pair-wise data relationships by preserving the global geometric structure, while self-supervised learning methods such as SimCLR and BYOL focus on mining the local statistics of instances under specific augmentations.
1 code implementation • 30 Nov 2021 • Siyuan Li, Zicheng Liu, Zedong Wang, Di wu, Zihan Liu, Stan Z. Li
Accordingly, we propose $\eta$-balanced mixup loss for complementary learning of the two sub-objectives.
Ranked #7 on Image Classification on Places205
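The mixup classification sub-objective that the $\eta$-balanced loss weighs can be sketched as a convex combination of two cross-entropies (the $\eta$-balancing between AutoMix's sub-objectives and its sample-generation network are not reproduced; names here are illustrative):

```python
import numpy as np

def mixup_cross_entropy(logits, y_a, y_b, lam):
    """Mixup loss: lam * CE(pred, y_a) + (1 - lam) * CE(pred, y_b),
    where the input was mixed as lam * x_a + (1 - lam) * x_b."""
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    n = len(y_a)
    ce_a = -log_probs[np.arange(n), y_a].mean()
    ce_b = -log_probs[np.arange(n), y_b].mean()
    return lam * ce_a + (1 - lam) * ce_b
```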
2 code implementations • 7 Jul 2022 • Zelin Zang, Siyuan Li, Di wu, Ge Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
To overcome the underconstrained embedding problem, we design a loss and theoretically demonstrate that it leads to a more suitable embedding based on the local flatness.
Ranked #2 on Image Classification on ImageNet-100
1 code implementation • 11 Sep 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Di wu, Cheng Tan, Weiyang Jin, Stan Z. Li
Data mixing, or mixup, is a data-dependent augmentation technique that has greatly enhanced the generalizability of modern deep neural networks.
2 code implementations • 14 Feb 2024 • Siyuan Li, Zicheng Liu, Juanxi Tian, Ge Wang, Zedong Wang, Weiyang Jin, Di wu, Cheng Tan, Tao Lin, Yang Liu, Baigui Sun, Stan Z. Li
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization to learn flat optima for better generalizations without extra cost in deep neural network (DNN) optimization.
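The EMA update the abstract refers to is a one-line rule applied to the model weights after each optimizer step. A minimal sketch (the function name is illustrative; frameworks usually track this alongside the live parameters):

```python
def ema_update(avg_params, params, decay=0.999):
    """One EMA step over model weights: avg <- decay * avg + (1 - decay) * w.
    The averaged copy, not the live weights, is used for evaluation."""
    return [decay * a + (1 - decay) * p for a, p in zip(avg_params, params)]
```

A high decay makes the average change slowly, which is what yields the flatter optima and better generalization described above.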
6 code implementations • 7 Nov 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di wu, ZhiYuan Chen, Jiangbin Zheng, Stan Z. Li
Notably, MogaNet hits 80.0% and 87.8% accuracy with 5.2M and 181M parameters on ImageNet-1K, outperforming ParC-Net and ConvNeXt-L while saving 59% FLOPs and 17M parameters, respectively.
Ranked #1 on Pose Estimation on COCO val2017
2 code implementations • CVPR 2022 • Xuehui Yu, Pengfei Chen, Di wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, Qixiang Ye, Zhenjun Han
In this study, we propose a POL method using coarse point annotations, relaxing the supervision signals from accurate key points to freely spotted points.
1 code implementation • ICCV 2023 • Di wu, Pengfei Chen, Xuehui Yu, Guorong Li, Zhenjun Han, Jianbin Jiao
Object detection with inaccurate bounding-box supervision has attracted broad interest due to the expense of high-quality annotation data or the occasional inevitability of low annotation quality (e.g., tiny objects).
1 code implementation • 25 Aug 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di wu, Chong Wang
Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple participants.
3 code implementations • 27 May 2022 • Siyuan Li, Di wu, Fang Wu, Zelin Zang, Stan. Z. Li
We then propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way.
3 code implementations • 29 Mar 2022 • BinBin Zhang, Di wu, Zhendong Peng, Xingchen Song, Zhuoyuan Yao, Hang Lv, Lei Xie, Chao Yang, Fuping Pan, Jianwei Niu
Recently, we made available WeNet, a production-oriented end-to-end speech recognition toolkit, which introduces a unified two-pass (U2) framework and a built-in runtime to address the streaming and non-streaming decoding modes in a single model.
1 code implementation • 1 Nov 2022 • Xingchen Song, Di wu, Zhiyong Wu, BinBin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu
In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models.
5 code implementations • 10 Dec 2020 • BinBin Zhang, Di wu, Zhuoyuan Yao, Xiong Wang, Fan Yu, Chao Yang, Liyong Guo, Yaguang Hu, Lei Xie, Xin Lei
In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.
Ranked #6 on Speech Recognition on AISHELL-1
4 code implementations • 2 Feb 2021 • Zhuoyuan Yao, Di wu, Xiong Wang, BinBin Zhang, Fan Yu, Chao Yang, Zhendong Peng, Xiaoyu Chen, Lei Xie, Xin Lei
In this paper, we propose an open source, production first, and production ready speech recognition toolkit called WeNet in which a new two-pass approach is implemented to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model.