2 code implementations • Findings (ACL) 2022 • Sen yang, Leyang Cui, Ruoxi Ning, Di wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
no code implementations • 30 Oct 2024 • Fulai Yang, Di wu, Yi He, Li Tao, Xin Luo
However, existing approaches treat these relationships and mechanisms only loosely, through non-end-to-end learning frameworks, resulting in sub-optimal feature extraction and fusion for CD.
1 code implementation • 29 Oct 2024 • Yuwei Fu, Haichao Zhang, Di wu, Wei Xu, Benoit Boulet
To address this issue, in this paper, we introduce the Temporal Optimal Transport (TemporalOT) reward to incorporate temporal order information for learning a more accurate OT-based proxy reward.
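As a hedged illustration of the general idea (not the authors' implementation), a minimal numpy sketch of an OT-based proxy reward with a temporal band mask follows; the cosine cost, band width, and Sinkhorn settings are all illustrative assumptions:

```python
import numpy as np

def temporal_ot_reward(agent_feats, expert_feats, window=5, reg=0.05, n_iters=50):
    """Entropic OT between two trajectories; a band mask enforces temporal order."""
    T_a, T_e = len(agent_feats), len(expert_feats)
    A = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    E = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    C = 1.0 - A @ E.T  # cosine cost between agent and expert frames
    # Temporal mask: forbid matches far from the (normalized) diagonal.
    i = np.arange(T_a)[:, None] / T_a
    j = np.arange(T_e)[None, :] / T_e
    C = np.where(np.abs(i - j) * max(T_a, T_e) <= window, C, 1e3)
    K = np.exp(-C / reg)  # masked entries underflow to ~0
    a, b = np.ones(T_a) / T_a, np.ones(T_e) / T_e
    u, v = a.copy(), b.copy()
    for _ in range(n_iters):  # Sinkhorn iterations
        u = a / (K @ v + 1e-12)
        v = b / (K.T @ u + 1e-12)
    plan = u[:, None] * K * v[None, :]
    return -(plan * C).sum(axis=1)  # per-step proxy reward

rewards = temporal_ot_reward(np.random.randn(60, 16), np.random.randn(50, 16))
```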
no code implementations • 28 Oct 2024 • Jun Bai, Yiliao Song, Di wu, Atul Sajjanhar, Yong Xiang, Wei Zhou, Xiaohui Tao, Yan Li
Specifically, a new stratified learning structure is proposed to cover data heterogeneity, and the value of each item during computation reflects model heterogeneity.
1 code implementation • 20 Oct 2024 • Yuankai Li, Jia-Chen Gu, Di wu, Kai-Wei Chang, Nanyun Peng
Based on our synthetic data built entirely by open-source models, BRIEF generates more concise summaries and enables a range of LLMs to achieve exceptional open-domain question answering (QA) performance.
no code implementations • 18 Oct 2024 • Hao Sui, Bing Chen, Jiale Zhang, Chengcheng Zhu, Di wu, Qinghua Lu, Guodong Long
Recent studies have revealed that GNNs are highly susceptible to multiple adversarial attacks.
no code implementations • 15 Oct 2024 • Jilan Samiuddin, Benoit Boulet, Di wu
Finally, an output network comprising a multilayer perceptron predicts the trajectories using the decoded states as its inputs.
1 code implementation • 14 Oct 2024 • Di wu, Hongwei Wang, Wenhao Yu, Yuwei Zhang, Kai-Wei Chang, Dong Yu
Recent large language model (LLM)-driven chat assistant systems have integrated memory components to track user-assistant chat histories, enabling more accurate and personalized responses.
no code implementations • 13 Oct 2024 • Di wu, Siyuan Li, Chen Feng, Lu Cao, Yue Zhang, Jie Yang, Mohamad Sawan
To address these limitations, we introduce Homogeneity-Heterogeneity Disentangled Learning for neural Representations (H2DiLR), a novel framework that disentangles and learns both the homogeneity and heterogeneity from intracranial recordings across multiple subjects.
no code implementations • 10 Oct 2024 • Junzhou Chen, Xuan Wen, Ronghui Zhang, Bingtao Ren, Di wu, Zhigang Xu, Danwei Wang
Unsupervised Domain Adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain by addressing the domain shift.
no code implementations • 7 Oct 2024 • Maolin Li, Wei Gao, Qi Wu, Feng Shu, Cunhua Pan, Di wu
Secondly, three methods are proposed to optimize the IRS phase shift, namely the vector trajectory (VT) method, the cross-entropy vector trajectory (CE-VT) algorithm, and the block coordinate descent vector trajectory (BCD-VT) algorithm.
no code implementations • 27 Sep 2024 • Seth Aycock, David Stap, Di wu, Christof Monz, Khalil Sima'an
We thus emphasise the importance of task-appropriate data for XLR languages: parallel examples for translation, and grammatical data for linguistic tasks.
no code implementations • 30 Aug 2024 • Di wu
The year 2024 witnessed a major development in the cryptocurrency industry with the long-awaited approval of spot Bitcoin exchange-traded funds (ETFs).
no code implementations • 29 Aug 2024 • Amir Farakhor, Di wu, Pingen Chen, Junmin Wang, Yebin Wang, Huazhen Fang
In particular, we capture the degradation costs of the retired battery packs through a weighted average Ah-throughput aging model.
no code implementations • 29 Aug 2024 • Yaping He, Linhao Jiang, Di wu
Initially, the model undergoes over-parameterization and training, with orthogonal regularization applied to enhance its likelihood of achieving the accuracy of the original model.
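A minimal sketch of one common form of orthogonal regularization (the soft penalty ||WW^T - I||_F^2) is shown below; whether this exact form matches the paper's choice is an assumption.

```python
import torch

def orthogonal_penalty(model, weight=1e-4):
    """Soft orthogonality penalty summed over all 2-D weight matrices."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:
            gram = p @ p.t()
            eye = torch.eye(gram.size(0), device=p.device)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return weight * penalty

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
# Usage during training: loss = task_loss + orthogonal_penalty(model)
print(orthogonal_penalty(model))
```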
no code implementations • 28 Aug 2024 • Di wu
In the context of financial credit risk evaluation, the fairness of machine learning models has become a critical concern, especially given the potential for biased predictions that disproportionately affect certain demographic groups.
no code implementations • 19 Aug 2024 • Chengming Hu, Hao Zhou, Di wu, Xi Chen, Jun Yan, Xue Liu
Following this comprehensive feedback, our proposed TrafficLLM introduces refinement demonstration prompts, enabling the same LLM to further refine its predictions and thereby enhance prediction performance.
1 code implementation • 17 Aug 2024 • Leizhen Zhang, Lusi Li, Di wu, Sheng Chen, Yi He
The technical challenge of our setting is twofold: 1) streaming feature inputs, such that an informative feature may become obsolete or redundant for prediction if its information has been covered by other similar features that arrived prior to it, and 2) non-associational feature correlation, such that bias may be leaked from those seemingly admissible, non-protected features.
no code implementations • 14 Aug 2024 • Zhonglin Chen, Anyu Geng, Jianan Jiang, Jiwu Lu, Di wu
The proposed FFAFPM enriches semantic information and enhances the fusion of shallow and deep features, thereby significantly reducing false positive results.
1 code implementation • 8 Aug 2024 • Yaoxun Xu, Xingchen Song, Zhiyong Wu, Di wu, Zhendong Peng, BinBin Zhang
In automatic speech recognition, subsampling is essential for tackling diverse scenarios.
no code implementations • 5 Aug 2024 • Hao Zhou, Chengming Hu, Dun Yuan, Ye Yuan, Di wu, Xue Liu, Zhu Han, Charlie Zhang
In particular, we first introduce the communication system model, i.e., allocating radio resources and calculating link capacity to support generated content transmission, and then we present the LLM inference model to calculate the delay of content generation.
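A back-of-the-envelope sketch of those two models as described: Shannon link capacity plus a token-throughput inference delay. The bandwidth, SNR, token counts, and the simple additive delay model are illustrative assumptions.

```python
import math

def link_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon capacity of one link: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def total_delay_s(n_tokens, tokens_per_s, content_bits, capacity_bps):
    """Total delay = LLM inference time + transmission time of the content."""
    return n_tokens / tokens_per_s + content_bits / capacity_bps

cap = link_capacity_bps(bandwidth_hz=20e6, snr_linear=10 ** (15 / 10))  # 20 MHz, 15 dB
print(total_delay_s(n_tokens=512, tokens_per_s=40, content_bits=8e6, capacity_bps=cap))
```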
no code implementations • 1 Aug 2024 • Hao Zhou, Chengming Hu, Dun Yuan, Ye Yuan, Di wu, Xue Liu, Charlie Zhang
It avoids the complexity of tedious model training and hyper-parameter fine-tuning, which is a well-known bottleneck of many ML algorithms.
no code implementations • 30 Jul 2024 • Cheng Jiang, Gang Lu, Xue Ma, Di wu
Load data from power network clusters reflect the economic development of each area and are crucial for predicting regional trends and guiding power enterprises' decisions.
no code implementations • 29 Jul 2024 • Shuai Zhong, Zengtong Tang, Di wu
In applications related to big data and service computing, dynamic connections are frequently encountered, especially in the dynamic user-perspective quality-of-service (QoS) data of Web services.
no code implementations • 28 Jul 2024 • Chuike Sun, Junzhou Chen, Yue Zhao, Hao Han, Ruihai Jing, Guang Tan, Di wu
This article presents Appformer, a novel mobile application prediction framework inspired by the efficiency of Transformer-like architectures in processing sequential data through self-attention mechanisms.
1 code implementation • 19 Jul 2024 • Ruokai Yin, Youngeun Kim, Di wu, Priyadarshini Panda
We observe that naively running a dual-sparse SNN on existing spMspM accelerators designed for dual-sparse Artificial Neural Networks (ANNs) exhibits sub-optimal efficiency.
no code implementations • 2 Jul 2024 • Yan Meng, Di wu, Christof Monz
Web-mined parallel data, while massive, contain large amounts of noise.
no code implementations • 29 Jun 2024 • Yixin Wan, Di wu, Haoran Wang, Kai-Wei Chang
In this work, we propose DemOgraphic FActualIty Representation (DoFaiR), a benchmark to systematically quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
1 code implementation • 28 Jun 2024 • Di wu, Xiaoxian Shen, Kai-Wei Chang
Leveraging MetaKP, we design both supervised and unsupervised methods, including a multi-task fine-tuning approach and a self-consistency prompting method with large language models.
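A hedged sketch of what self-consistency prompting for keyphrases could look like: sample several LLM generations and keep only phrases that recur. The sampler interface and vote threshold are hypothetical, not the paper's exact method.

```python
from collections import Counter

def self_consistent_keyphrases(document, sample_fn, n_samples=8, min_votes=4):
    """sample_fn(document, seed) -> list of keyphrases from one sampled LLM run."""
    votes = Counter()
    for seed in range(n_samples):
        votes.update({p.strip().lower() for p in sample_fn(document, seed)})
    # Keep phrases that a majority of sampled generations agree on.
    return [p for p, c in votes.most_common() if c >= min_votes]

# Toy stand-in sampler to illustrate the voting behaviour:
demo = lambda doc, seed: ["keyphrase generation", "self-consistency"] + (
    ["spurious phrase"] if seed == 0 else []
)
print(self_consistent_keyphrases("...", demo))  # the spurious phrase is filtered out
```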
1 code implementation • 19 Jun 2024 • Di wu, Jia-Chen Gu, Fan Yin, Nanyun Peng, Kai-Wei Chang
Retrieval-augmented language models (RALMs) have shown strong performance and wide applicability in knowledge-intensive tasks.
no code implementations • 17 Jun 2024 • Jianan Jiang, Di wu, Zhilin Jiang, Weiren Yu
Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) aims to minimize the distance between sketches and corresponding images in the embedding space.
no code implementations • 12 Jun 2024 • Jun Bai, Di wu, Tristan Shelley, Peter Schubel, David Twine, John Russell, Xuesen Zeng, Ji Zhang
Material defects (MD) represent a primary challenge affecting product performance and giving rise to safety issues in related products.
no code implementations • 2 Jun 2024 • Di wu, Feng Yang, Benlian Xu, Pan Liao, Bo Liu
This paper focuses on a comprehensive survey of radar-vision (RV) fusion based on deep learning methods for 3D object detection in autonomous driving.
1 code implementation • 2 Jun 2024 • Yuwei Fu, Haichao Zhang, Di wu, Wei Xu, Benoit Boulet
In this work, we investigate how to leverage pre-trained visual-language models (VLM) for online Reinforcement Learning (RL).
no code implementations • 27 May 2024 • Chao Zhang, Haoxin Zhang, Shiwei Wu, Di wu, Tong Xu, Yan Gao, Yao Hu, Enhong Chen
We propose two ways to enhance the focus on visual information.
no code implementations • 24 May 2024 • Pan Liao, Feng Yang, Di wu, Liu Bo
We posit that MonoDETRNext establishes a new benchmark in monocular 3D object detection and opens avenues for future research.
no code implementations • 23 May 2024 • Qinghua Guan, Jinhui Ouyang, Di wu, Weiren Yu
Finally, the spatiotemporal fusion agent visualizes the system's results by receiving outputs from the data-analysis agents and invoking sub-visualization agents; it can also provide corresponding textual descriptions based on user demands.
no code implementations • 17 May 2024 • Hao Zhou, Chengming Hu, Ye Yuan, Yufei Cui, Yili Jin, Can Chen, Haolun Wu, Dun Yuan, Li Jiang, Di wu, Xue Liu, Charlie Zhang, Xianbin Wang, Jiangchuan Liu
Then, we introduce LLM-enabled key techniques and telecom applications in terms of generation, classification, optimization, and prediction problems.
no code implementations • 13 May 2024 • Siyuan Li, Zedong Wang, Zicheng Liu, Di wu, Cheng Tan, Jiangbin Zheng, Yufei Huang, Stan Z. Li
In this paper, we introduce VQDNA, a general-purpose framework that renovates genome tokenization from the perspective of genome vocabulary learning.
no code implementations • 8 May 2024 • Jinhui Ouyang, Yijie Zhu, Xiang Yuan, Di wu
The precise prediction of multi-scale traffic is a ubiquitous challenge in the urbanization process for car owners, road administrators, and governments.
no code implementations • 27 Apr 2024 • Di wu, Shicai Fan, Xue Zhou, Li Yu, Yuzhong Deng, Jianxiao Zou, Baihong Lin
In MDPS, the problem of normal image reconstruction is mathematically modeled as multiple diffusion posterior sampling for normal images, based on the devised masked noisy observation model and a diffusion-based normal image prior under a Bayesian framework.
no code implementations • 25 Apr 2024 • Xingchen Song, Di wu, BinBin Zhang, Dinghao Zhou, Zhendong Peng, Bo Dang, Fuping Pan, Chao Yang
Scale has opened new frontiers in natural language processing, but at a high cost.
no code implementations • 22 Apr 2024 • Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di wu, Miao Hu, Rong-Hua Li
Subgraph federated learning (subgraph-FL) is a new distributed paradigm that facilitates the collaborative training of graph neural networks (GNNs) by multi-client subgraphs.
no code implementations • 18 Apr 2024 • Jilan Samiuddin, Benoit Boulet, Di wu
Among these modules, the trajectory planner plays a pivotal role in the safety of the vehicle and the comfort of its passengers.
no code implementations • 17 Apr 2024 • Shaomu Tan, Di wu, Christof Monz
Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference.
no code implementations • 26 Mar 2024 • Youpeng Zhao, Di wu, Jun Wang
In a single GPU-CPU system, we demonstrate that under varying workloads, ALISA improves the throughput of baseline systems such as FlexGen and vLLM by up to 3X and 1.9X, respectively.
no code implementations • 15 Mar 2024 • Di wu, Wasi Uddin Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma
Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion.
no code implementations • 14 Mar 2024 • Jinhui Ouyang, Mingzhu Wu, Xinglin Li, Hanhui Deng, Di wu
To better extract the joint features of heterogeneous EEG data as well as enhance classification accuracy, BRIEDGE introduces an informer-based ProbSparse self-attention mechanism.
no code implementations • 13 Mar 2024 • Zhuoyin Dai, Di wu, Zhenjun Dong, Kun Li, Dingyang Ding, Sihan Wang, Yong Zeng
In this paper, to alleviate the large training overhead in millimeter wave (mmWave) beam alignment, an environment-aware and training-free beam alignment prototype is established based on a typical CKM, termed beam index map (BIM).
1 code implementation • 13 Mar 2024 • Jianan Jiang, Xinglin Li, Weiren Yu, Di wu
In the realm of fashion design, sketches serve as the canvas for expressing an artist's distinctive drawing style and creative vision, capturing intricate details like stroke variations and texture nuances.
no code implementations • 7 Mar 2024 • Jialin Chen, Zhiqiang Cai, Ke Xu, Di wu, Wei Cao
Considering the noise level limit, one crucial aspect of quantum machine learning is to design a high-performing variational quantum circuit architecture with a small number of quantum gates.
no code implementations • 4 Mar 2024 • Chao Zhang, Shiwei Wu, Haoxin Zhang, Tong Xu, Yan Gao, Yao Hu, Di wu, Enhong Chen
Indeed, learning to generate hashtags/categories can potentially enhance note embeddings, since both compress key note information into limited content.
1 code implementation • 28 Feb 2024 • Yibin Lei, Di wu, Tianyi Zhou, Tao Shen, Yu Cao, Chongyang Tao, Andrew Yates
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without the need for model fine-tuning.
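A minimal sketch in the spirit of MetaEOL: format a sentence into several one-word-limitation meta-task prompts and average the last-token hidden states. The prompt wording and the use of gpt2 as a small stand-in model are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPTS = [  # hypothetical meta-task templates with the one-word limitation
    'In this task, classify the text. The text "{s}" means in one word: "',
    'In this task, summarize the text. The text "{s}" means in one word: "',
]

def embed(sentence, model, tok):
    vecs = []
    for tpl in PROMPTS:
        inputs = tok(tpl.format(s=sentence), return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        vecs.append(out.hidden_states[-1][0, -1])  # last layer, last token
    return torch.stack(vecs).mean(0)  # average over meta-task prompts

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
print(embed("A cat sits on the mat.", model, tok).shape)
```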
no code implementations • 28 Feb 2024 • Juan Zhang, Jiahao Chen, Cheng Wang, Zhiwang Yu, Tangquan Qi, Di wu
Despite numerous completed studies, achieving high fidelity talking face generation with highly synchronized lip movements corresponding to arbitrary audio remains a significant challenge in the field.
1 code implementation • 21 Feb 2024 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
This study addresses the application of encoder-only Pre-trained Language Models (PLMs) in keyphrase generation (KPG) amidst the broader availability of domain-tailored encoder-only models compared to encoder-decoder models.
3 code implementations • 14 Feb 2024 • Siyuan Li, Zicheng Liu, Juanxi Tian, Ge Wang, Zedong Wang, Weiyang Jin, Di wu, Cheng Tan, Tao Lin, Yang Liu, Baigui Sun, Stan Z. Li
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization to learn flat optima for better generalizations without extra cost in deep neural network (DNN) optimization.
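For reference, a minimal sketch of the standard EMA weight-averaging update that this line of work builds on; the decay value is illustrative.

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """ema <- decay * ema + (1 - decay) * current weights."""
    for e, p in zip(ema_model.parameters(), model.parameters()):
        e.mul_(decay).add_(p, alpha=1.0 - decay)

model = torch.nn.Linear(4, 2)
ema_model = copy.deepcopy(model)
# ... after each optimizer step:
ema_update(ema_model, model)
```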
no code implementations • 31 Jan 2024 • Zikai Feng, Di wu, Mengxing Huang, Chau Yuen
In this paper, a novel graph-attention multi-agent trust region (GA-MATR) reinforcement learning framework is proposed to solve the multi-UAV assisted communication problem.
no code implementations • 30 Jan 2024 • Panagiotis Pagonis, Kai Hartung, Di wu, Munir Georges, Sören Gröttrup
Knowledge Tracing (KT) aims to predict the future performance of students by tracking the development of their knowledge states.
no code implementations • 26 Jan 2024 • Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, Lili Bo, Jiale Zhang, Bin Li, Wei Liu, Di wu, Yixin Chen
The remarkable achievements of Artificial Intelligence (AI) algorithms, particularly in Machine Learning (ML) and Deep Learning (DL), have fueled their extensive deployment across multiple sectors, including Software Engineering (SE).
1 code implementation • 22 Jan 2024 • Di wu, Shaomu Tan, Yan Meng, David Stap, Christof Monz
Zero-shot translation aims to translate between language pairs not seen during training in Multilingual Machine Translation (MMT) and is largely considered an open problem.
no code implementations • 16 Jan 2024 • Junliang Luo, Tianyu Li, Di wu, Michael Jenkin, Steve Liu, Gregory Dudek
Large language models (LLMs), including ChatGPT, Bard, and Llama, have achieved remarkable successes over the last two years in a range of different applications.
no code implementations • 5 Jan 2024 • Osten Anderson, Nanpeng Yu, Konstantinos Oikonomou, Di wu
To this end, we propose a novel method for selecting representative periods of any length.
1 code implementation • 31 Dec 2023 • Siyuan Li, Luyuan Zhang, Zedong Wang, Di wu, Lirong Wu, Zicheng Liu, Jun Xia, Cheng Tan, Yang Liu, Baigui Sun, Stan Z. Li
As the deep learning revolution marches on, self-supervised learning has garnered increasing attention in recent years thanks to its remarkable representation learning ability and the low dependence on labeled data.
no code implementations • 25 Dec 2023 • Xicong Shen, Yang Liu, Huiqi Liu, Jue Hong, Bing Duan, Zirui Huang, Yunlong Mao, Ye Wu, Di wu
Fine-tuning is a prominent technique to adapt a pre-trained language model to downstream scenarios.
no code implementations • International Conference on Communication, Image and Signal Processing (CCISP) 2023 • Di wu, Zhihui Xin, Chao Zhang
Experiments show that the proposed algorithm recovers image edges and texture-complex regions better than traditional gradient algorithms such as BI, Cok, Hibbard, Laroche, and Hamilton, with higher PSNR and SSIM values and better subjective visual quality. Moreover, the algorithm involves only add-subtract and shift operations, making it suitable for implementation on a hardware platform.
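A hedged sketch of a classic gradient-guided green-interpolation step from this algorithm family (Hamilton-Adams style), using only adds, subtracts, and integer shifts; it illustrates the hardware-friendly operation mix rather than the paper's exact algorithm.

```python
def interp_green_at_red(G_left, G_right, G_up, G_down, R_c, R_l2, R_r2, R_u2, R_d2):
    """Estimate G at a red pixel from neighboring G and second-neighbor R samples.
    Shifts stand in for divisions: >> 1 is /2, >> 2 is /4, << 1 is *2."""
    dh = abs(G_left - G_right) + abs((R_c << 1) - R_l2 - R_r2)  # horizontal gradient
    dv = abs(G_up - G_down) + abs((R_c << 1) - R_u2 - R_d2)     # vertical gradient
    if dh < dv:  # interpolate along the direction of the smaller gradient
        return ((G_left + G_right) >> 1) + (((R_c << 1) - R_l2 - R_r2) >> 2)
    return ((G_up + G_down) >> 1) + (((R_c << 1) - R_u2 - R_d2) >> 2)

print(interp_green_at_red(100, 110, 98, 120, 105, 100, 108, 96, 118))
```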
no code implementations • 19 Dec 2023 • Di wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu
In this paper, we establish a non-asymptotic estimation error of pessimistic offline RL using general neural network approximation with $\mathcal{C}$-mixing data regarding the structure of networks, the dimension of datasets, and the concentrability of data coverage, under mild assumptions.
1 code implementation • 12 Dec 2023 • Xingshuai Huang, Di wu, Benoit Boulet
In this work, we propose DTLight, a simple yet powerful lightweight Decision Transformer-based TSC method that can learn policy from easily accessible offline datasets.
no code implementations • 6 Dec 2023 • Jimmy Li, Igor Kozlov, Di wu, Xue Liu, Gregory Dudek
This coincides with a rapid increase in the number of cell sites worldwide, driven largely by dramatic growth in cellular network traffic.
no code implementations • 27 Nov 2023 • Xinglin Li, Kun Wang, Hanhui Deng, Yuxuan Liang, Di wu
We propose the novel concept of a Shock Absorber (a type of perturbation) that enhances the robustness and stability of the original graphs against changes, in an adversarial training fashion.
1 code implementation • 1 Nov 2023 • Po-Nien Kung, Fan Yin, Di wu, Kai-Wei Chang, Nanyun Peng
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
no code implementations • 25 Oct 2023 • Amir Farakhor, Di wu, Yebin Wang, Huazhen Fang
Since the number of clusters is far smaller than the number of cells, the proposed approach significantly reduces the computational cost, allowing optimal power management to scale up to large-scale BESS.
no code implementations • 24 Oct 2023 • Yifan Tang, M. Rahmani Dehaghani, Pouyan Sajadi, Shahriar Bakrani Balani, Akshay Dhalpe, Suraj Panicker, Di wu, Eric Coatanea, G. Gary Wang
With measured/predicted temperature profiles of several points on the same layer, the second stage proposes a reduced order model (ROM) (intra-layer prediction model) to decompose and construct the temperature profiles of all points on the same layer, which could be used to build the temperature field of the entire layer.
no code implementations • 20 Oct 2023 • Xabi Azagirre, Akshay Balwally, Guillaume Candeli, Nicholas Chamandy, Benjamin Han, Alona King, Hyungjun Lee, Martin Loncaric, Sebastien Martin, Vijay Narasiman, Zhiwei Qin, Baptiste Richard, Sara Smoot, Sean Taylor, Garrett van Ryzin, Di wu, Fei Yu, Alex Zamoshchin
This change was the first documented implementation of a ridesharing matching algorithm that can learn and improve in real time.
no code implementations • 19 Oct 2023 • Min Gyung Yu, Xu Ma, Bowen Huang, Karthik Devaprasad, Fredericka Brown, Di wu
The solution is determined considering both capital costs in optimal sizing and operational benefits in optimal dispatch.
no code implementations • 15 Oct 2023 • Di wu, Shaomu Tan, David Stap, Ali Araabi, Christof Monz
This paper describes the UvA-MT's submission to the WMT 2023 shared task on general machine translation.
1 code implementation • 12 Oct 2023 • Jinye Yang, Ji Xu, Di wu, Jianhang Tang, Shaobo Li, Guoyin Wang
The deviation of a classification model is caused by both class-wise and attribute-wise imbalance.
1 code implementation • 10 Oct 2023 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
DeSel improves greedy search by an average of 4.7% semantic F1 across five datasets.
no code implementations • 5 Oct 2023 • Junliang Luo, Yi Tian Xu, Di wu, Michael Jenkin, Xue Liu, Gregory Dudek
In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch on/off the cells of base stations to reduce network power consumption while maintaining adequate Quality of Service (QoS) metrics.
no code implementations • 18 Sep 2023 • Renhe Chen, Albert Lee, ZiRui Wang, Di wu, Xufeng Kou
This brief introduces a read bias circuit to improve readout yield of magnetic random access memories (MRAMs).
no code implementations • 2 Aug 2023 • Yuzhu Li, Nir Pillar, Jingxi Li, Tairan Liu, Di wu, Songyu Sun, Guangdong Ma, Kevin De Haan, Luzhe Huang, Sepehr Hamidi, Anatoly Urisman, Tal Keidar Haran, William Dean Wallace, Jonathan E. Zuckerman, Aydogan Ozcan
Histological examination is a crucial step in an autopsy; however, the traditional histochemical staining of post-mortem samples faces multiple challenges, including the inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, as well as the resource-intensive nature of chemical staining procedures covering large tissue areas, which demand substantial labor, cost, and time.
1 code implementation • ICCV 2023 • Di wu, Pengfei Chen, Xuehui Yu, Guorong Li, Zhenjun Han, Jianbin Jiao
Object detection with inaccurate bounding-box supervision has attracted broad interest due to the expense of high-quality annotation data or the occasional inevitability of low annotation quality (e.g., tiny objects).
no code implementations • 18 Jul 2023 • Xingyue Ma, Hongying Chen, Ri He, Zhanbo Yu, Sergei Prokhorenko, Zheng Wen, Zhicheng Zhong, Jorge Iñiguez, L. Bellaiche, Di wu, Yurong Yang
The first-principles-based effective Hamiltonian scheme provides one of the most accurate modeling techniques for large-scale structures, especially for ferroelectrics.
1 code implementation • 23 Jun 2023 • Amal Feriani, Di wu, Steve Liu, Greg Dudek
This work offers a comprehensive and unified framework to help researchers evaluate and design data-driven channel estimation algorithms.
no code implementations • 19 Jun 2023 • Liping Zhang, Di wu, Xin Luo
Then, based on the idea of stacking ensemble, long short-term memory is employed as an error correction module to forecast the components separately, and the forecast results are treated as new features to be fed into extreme gradient boosting for the second-step forecasting.
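A toy sketch of the two-step stacking flow described above; the naive lag-1 component forecaster is a placeholder standing in for the LSTM error-correction module.

```python
import numpy as np
from xgboost import XGBRegressor

def forecast_components(components):
    """Hypothetical stand-in for the per-component LSTM forecasts (naive lag-1)."""
    return np.stack([np.roll(c, 1) for c in components], axis=1)

# Decomposed components of a load series (placeholders for the real decomposition).
components = [np.sin(np.arange(200) / p) for p in (3.0, 11.0, 29.0)]
y = np.sum(components, axis=0)

first_step = forecast_components(components)        # (T, n_components) new features
model = XGBRegressor(n_estimators=200, max_depth=4)
model.fit(first_step[1:], y[1:])                    # second-step forecast from component forecasts
print(model.predict(first_step[-1:]))
```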
no code implementations • 31 May 2023 • Yan Wang, Feng Shu, Zhihong Zhuang, Rongen Dong, Qi Zhang, Di wu, Liang Yang, Jiangzhou Wang
Numerical simulation results show that a 3-bit discrete phase shifter is required to achieve a trivial performance loss for a large-scale active IRS.
1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti
We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.
1 code implementation • 23 May 2023 • Di wu, Christof Monz
Using a vocabulary that is shared across languages is common practice in Multilingual Neural Machine Translation (MNMT).
no code implementations • 22 May 2023 • Chu-Kuan Jiang, Yang-Fan Deng, Hongxiao Guo, Guang-Hao Chen, Di wu
Typical pretreated wastewater was synthesized with chemical oxygen demand of 110 mg/L, sulfate of 50 mg S/L, and varying dissolved oxygen (DO) and was fed into a moving-bed biofilm reactor (MBBR).
no code implementations • 18 May 2023 • Xingchen Song, Di wu, BinBin Zhang, Zhendong Peng, Bo Dang, Fuping Pan, Zhiyong Wu
In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective training-free methods to decrease the Token Display Time (TDT) of streaming ASR models without any accuracy loss.
1 code implementation • 10 May 2023 • Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di wu
In this paper, we propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on models collected from clients.
no code implementations • 9 May 2023 • Yunchao Yang, Yipeng Zhou, Miao Hu, Di wu, Quan Z. Sheng
The challenge of this problem lies in the opaque feedback between reward budget allocation and model utility improvement of FL, making the optimal reward budget allocation complicated.
1 code implementation • 27 Mar 2023 • Di wu, Da Yin, Kai-Wei Chang
Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation mainly relies on exact matching with human references.
no code implementations • 25 Mar 2023 • Miao Hu, Zhenxiao Luo, Amirmohammad Pasdar, Young Choon Lee, Yipeng Zhou, Di wu
Edge computing has been gaining momentum with the ever-increasing data at the edge of the network.
no code implementations • 22 Mar 2023 • Borui Cai, Yong Xiang, Longxiang Gao, Di wu, He Zhang, Jiong Jin, Tom Luan
To seek a simple strategy to improve the parameter efficiency of conventional KGE models, we take inspiration from the observation that, for compositional structures, deeper neural networks require exponentially fewer parameters than wider networks to achieve comparable expressiveness.
no code implementations • 22 Mar 2023 • Abhisek Konar, Di wu, Yi Tian Xu, Seowoo Jang, Steve Liu, Gregory Dudek
Engineering this reward function is challenging because it requires expert knowledge and there is no general consensus on the form of an optimal reward function.
no code implementations • 22 Mar 2023 • Yi Tian Xu, Jimmy Li, Di wu, Michael Jenkin, Seowoo Jang, Xue Liu, Gregory Dudek
When deploying to an unknown traffic scenario, we select a policy from the policy bank based on the similarity between the previous-day traffic of the current scenario and the traffic observed during training.
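A minimal sketch of that selection rule: pick the stored policy whose training-time traffic profile is closest to the previous day's traffic (Euclidean distance here is an assumption about the similarity measure).

```python
import numpy as np

def select_policy(policy_bank, yesterday_traffic):
    """policy_bank: list of (training_traffic_profile, policy) pairs."""
    dists = [np.linalg.norm(profile - yesterday_traffic) for profile, _ in policy_bank]
    return policy_bank[int(np.argmin(dists))][1]  # policy with the most similar traffic
```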
no code implementations • 14 Mar 2023 • Jikun Kang, Di wu, Ju Wang, Ekram Hossain, Xue Liu, Gregory Dudek
In cellular networks, User Equipment (UE) hands off from one Base Station (BS) to another, giving rise to the load-balancing problem among the BSs.
no code implementations • 11 Mar 2023 • Xijuan Sun, Di wu, Arnaud Zinflou, Benoit Boulet
Usually, machine learning-based methods need to model the normal data distribution.
no code implementations • 25 Feb 2023 • Ruiyang Xu, Di wu, Xin Luo
Traditional feature selection needs to know the feature space before learning; online streaming feature selection (OSFS) was proposed to process streaming features on the fly.
no code implementations • 7 Feb 2023 • Huiliang Zhang, Di wu, Benoit Boulet
Safety has been recognized as the central obstacle preventing the use of reinforcement learning (RL) in real-world applications.
no code implementations • 3 Feb 2023 • Igor Kozlov, Dmitriy Rivkin, Wei-Di Chang, Di wu, Xue Liu, Gregory Dudek
Such networks undergo frequent and often heterogeneous changes caused by network operators, who are seeking to tune their system parameters for optimal performance.
no code implementations • 12 Jan 2023 • Amir Farakhor, Di wu, Yebin Wang, Huazhen Fang
An optimal power management approach is developed to extensively exploit the merits of the proposed design.
1 code implementation • CVPR 2023 • Xinglin Li, Jiajing Chen, Jinhui Ouyang, Hanhui Deng, Senem Velipasalar, Di wu
Recent years have witnessed significant developments in point cloud processing, including classification and segmentation.
no code implementations • 20 Dec 2022 • Cheng Liang, Teng Huang, Yi He, Song Deng, Di wu, Xin Luo
The idea of the proposed MMA is mainly two-fold: 1) apply different $L_p$-norms to the loss function and regularization to form variant models in different metric spaces, and 2) aggregate these variant models.
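A toy sketch of that multi-metric idea on a matrix-factorization model: one variant per $L_p$ norm, aggregated by plain averaging. The p values, learning rates, and averaging rule are assumptions, not the paper's exact design.

```python
import numpy as np

def train_mf(R, mask, p, k=8, lr=0.01, lam=0.05, epochs=100, seed=0):
    """SGD matrix factorization under an L_p loss on observed entries."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0, 0.1, (R.shape[0], k))
    V = rng.normal(0, 0.1, (R.shape[1], k))
    for _ in range(epochs):
        E = mask * (R - U @ V.T)
        G = np.sign(E) * np.abs(E) ** (p - 1)  # gradient weight of the L_p loss
        U += lr * (G @ V - lam * U)
        V += lr * (G.T @ U - lam * V)
    return U @ V.T

R = np.random.rand(20, 15)
mask = (np.random.rand(20, 15) < 0.3).astype(float)  # observed entries
variants = [train_mf(R, mask, p) for p in (1.2, 1.5, 2.0)]
prediction = np.mean(variants, axis=0)  # aggregate the variant models
```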
1 code implementation • 20 Dec 2022 • Di wu, Wasi Uddin Ahmad, Kai-Wei Chang
However, a systematic study of how the two types of approaches compare, and of how different design choices affect the performance of PLM-based models, is still lacking.
no code implementations • 20 Dec 2022 • Baopu Qiu, Liang Ding, Di wu, Lin Shang, Yibing Zhan, DaCheng Tao
Machine Translation Quality Estimation (QE) is the task of evaluating translation output in the absence of human-written references.
no code implementations • 16 Dec 2022 • Rongxing Hu, Kai Ye, Hyeonjin Kim, Hanpyo Lee, Ning Lu, Di wu, PJ Rehm
This paper presents a coordinative demand charge mitigation (DCM) strategy for reducing electricity consumption during system peak periods.
no code implementations • CVPR 2023 • Taotao Zhou, Kai He, Di wu, Teng Xu, Qixuan Zhang, Kuixiang Shao, Wenzheng Chen, Lan Xu, Jingyi Yu
UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
no code implementations • 9 Dec 2022 • Kai Ye, Hyeonjin Kim, Yi Hu, Ning Lu, Di wu, PJ Rehm
This paper presents a modified sequence-to-point (S2P) algorithm for disaggregating the heat, ventilation, and air conditioning (HVAC) load from the total building electricity consumption.
no code implementations • 29 Nov 2022 • Yiyan Li, Lidong Song, Yi Hu, Hanpyo Lee, Di wu, PJ Rehm, Ning Lu
We propose a Generator structure consisting of a coarse network and a fine-tuning network.
no code implementations • 16 Nov 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di wu
To our knowledge, we are the first to present an FSSL method that utilizes only 10% of labeled clients, while still achieving superior performance compared to standard federated supervised learning, which uses all clients with labeled data.
7 code implementations • 7 Nov 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di wu, ZhiYuan Chen, Jiangbin Zheng, Stan Z. Li
Notably, MogaNet hits 80.0% and 87.8% accuracy with 5.2M and 181M parameters on ImageNet-1K, outperforming ParC-Net and ConvNeXt-L, while saving 59% FLOPs and 17M parameters, respectively.
no code implementations • 7 Nov 2022 • Han Pyo Lee, Yiyan Li, Lidong Song, Di wu, Ning Lu
In contrast to many existing methods, we treat CVR baseline estimation as a missing data retrieval problem.
1 code implementation • 1 Nov 2022 • Xingchen Song, Di wu, Zhiyong Wu, BinBin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu
In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models.
no code implementations • 31 Oct 2022 • Xingchen Song, Di wu, BinBin Zhang, Zhiyong Wu, Wenpeng Li, Dongfang Li, Pengshen Zhang, Zhendong Peng, Fuping Pan, Changbao Zhu, Zhongqin Wu
Therefore, we name it FusionFormer.
no code implementations • 23 Oct 2022 • Huiliang Zhang, Di wu, Benoit Boulet
The building sector has been recognized as one of the primary contributors to worldwide energy consumption.
no code implementations • 12 Oct 2022 • Wenjian Hao, Bowen Huang, Wei Pan, Di wu, Shaoshuai Mou
This paper presents a data-driven approach that approximates the dynamics of a nonlinear time-varying system (NTVS) by a linear time-varying system (LTVS), obtained from the Koopman operator and deep neural networks.
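A minimal numpy sketch of the underlying Koopman recipe: lift states with a feature map, then fit a linear operator by least squares. The fixed polynomial/sinusoidal lifting stands in for the learned deep-network lifting, and the toy dynamics are illustrative.

```python
import numpy as np

def lift(x):
    """Stand-in for the learned neural lifting g(x)."""
    return np.concatenate([x, x ** 2, np.sin(x)])

X = np.random.randn(500, 3)            # states x_t
Y = 0.9 * X + 0.05 * np.sin(X)         # states x_{t+1} from some toy nonlinear system
GX = np.stack([lift(x) for x in X])    # lifted snapshots g(x_t)
GY = np.stack([lift(y) for y in Y])    # lifted snapshots g(x_{t+1})
K, *_ = np.linalg.lstsq(GX, GY, rcond=None)  # linear lifted dynamics: g(x_{t+1}) ≈ g(x_t) K
print(np.linalg.norm(GX @ K - GY) / np.linalg.norm(GY))  # relative fit error
```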
no code implementations • 3 Oct 2022 • Di wu, Jie Yang, Mohamad Sawan
In this survey, we assess the eligibility of more than fifty published peer-reviewed representative transfer learning approaches for EMG applications.
no code implementations • 19 Sep 2022 • Hyeonjin Kim, Kai Ye, Han Pyo Lee, Rongxing Hu, Ning Lu, Di wu, PJ Rehm
The residual load profiles are processed using ICA for HVAC load extraction.
1 code implementation • 11 Sep 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Juanxi Tian, Di wu, Cheng Tan, Weiyang Jin, Stan Z. Li
Mixup augmentation has emerged as a widely used technique for improving the generalization ability of deep neural networks (DNNs).
1 code implementation • 25 Aug 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di wu, Chong Wang
Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple participants.
no code implementations • 23 Aug 2022 • Qi Guo, Yong Qi, Saiyu Qi, Di wu, Qian Li
Federated learning (FL) facilitates multiple clients to jointly train a machine learning model without sharing their private data.
no code implementations • 16 Aug 2022 • Yuting Ding, Di wu
In the past decade, scholars have done much research on recovering missing traffic data; however, how to make full use of spatio-temporal traffic patterns to improve recovery performance remains an open problem.
no code implementations • 13 Aug 2022 • Yuanyi Liu, Jia Chen, Di wu
The A2BAS algorithm consists of two sub-algorithms.
no code implementations • 2 Aug 2022 • Xin Cheng, Feng Shu, YiFan Li, Zhihong Zhuang, Di wu, Jiangzhou Wang
In this paper, optimal geometrical configurations of UAVs in received signal strength (RSS)-based localization under region constraints are investigated.
no code implementations • 2 Aug 2022 • Feilong Chen, Di wu, Jie Yang, Yi He
In many real applications, such as intelligent healthcare platforms, streaming features often contain missing data, which raises a crucial challenge in conducting OSFS, i.e., how to establish the uncertain relationship between sparse streaming features and labels.
no code implementations • 14 Jul 2022 • Ningkun Zheng, Xin Qin, Di wu, Gabe Murtaugh, Bolun Xu
Combined with an optimal bidding design algorithm using dynamic programming, our paper shows that the SoC segment market model provides more accurate representations of the opportunity costs of energy storage compared to existing power-based bidding models.
2 code implementations • 7 Jul 2022 • Zelin Zang, Siyuan Li, Di wu, Ge Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
To overcome the underconstrained embedding problem, we design a loss and theoretically demonstrate that it leads to a more suitable embedding based on the local flatness.
no code implementations • AAAI Conference on Artificial Intelligence 2022 • Yuwei Fu, Di wu, Benoit Boulet
To deal with this challenge, we propose a reinforcement learning (RL) based model combination (RLMC) framework for determining model weights in an ensemble for time series forecasting tasks.
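A toy sketch of the model-combination idea: maintain a softmax weighting over base forecasters and update it from realized forecast errors. This is a simple bandit-style stand-in under stated assumptions, not the paper's RL formulation.

```python
import numpy as np

def combine(forecasts, scores, temp=0.5):
    """Softmax weighting of base forecasts by preference scores."""
    w = np.exp(scores / temp)
    w /= w.sum()
    return float(w @ forecasts), w

rng = np.random.default_rng(0)
scores = np.zeros(3)  # one preference score per base forecaster
for t in range(1000):
    y_true = np.sin(t / 20.0)
    forecasts = y_true + rng.normal(0.0, [0.1, 0.3, 0.6])  # three base models
    y_hat, w = combine(forecasts, scores)
    rewards = -np.abs(forecasts - y_true)        # reward = negative absolute error
    scores += 0.1 * (rewards - rewards.mean())   # preference (policy-style) update

print(w)  # weight concentrates on the most accurate forecaster
```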
no code implementations • 6 Jun 2022 • Jiajia Zhou, Junbin Zhuang, Yan Zheng, Di wu
Because this network makes "Haar Images into Fusion Images", it is called HIFI-Net.
3 code implementations • 27 May 2022 • Siyuan Li, Di wu, Fang Wu, Zelin Zang, Stan. Z. Li
We then propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way.
no code implementations • 26 May 2022 • Kang Liu, Di wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
no code implementations • 24 May 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di wu, Chong Wang
In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
1 code implementation • 19 May 2022 • Jiuqi Elise Zhang, Di wu, Benoit Boulet
Time series anomaly detection has been recognized as of critical importance for the reliable and efficient operation of real-world systems.
no code implementations • 1 May 2022 • Wenbin Song, Di wu, Weiming Shen, Benoit Boulet
One of the key points of EFD is developing a generic model to extract robust and discriminative features from different equipment for early fault detection.
no code implementations • 27 Apr 2022 • Wenbin Song, Di wu, Weiming Shen, Benoit Boulet
To address this problem, many transfer learning based EFD methods utilize historical data to learn transferable domain knowledge and conduct early fault detection on new target bearings.
1 code implementation • 20 Apr 2022 • Di wu, Siyuan Li, Jie Yang, Mohamad Sawan
To address the appetite for data in deep learning, we present Neuro-BERT, a self-supervised pre-training framework of neurological signals based on masked autoencoding in the Fourier domain.
no code implementations • 16 Apr 2022 • Di wu, Peng Zhang, Yi He, Xin Luo
High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big data-related applications.
no code implementations • 16 Apr 2022 • Di wu, Yi He, Xin Luo
High-dimensional and sparse (HiDS) matrices are frequently encountered in big data-related applications such as e-commerce systems and social network service systems.
no code implementations • 2 Apr 2022 • Jia Chen, Di wu, Xin Luo
High-dimensional and sparse (HiDS) matrices are frequently adopted to describe the complex relationships in various big data-related systems and applications.
3 code implementations • 29 Mar 2022 • BinBin Zhang, Di wu, Zhendong Peng, Xingchen Song, Zhuoyuan Yao, Hang Lv, Lei Xie, Chao Yang, Fuping Pan, Jianwei Niu
Recently, we made available WeNet, a production-oriented end-to-end speech recognition toolkit, which introduces a unified two-pass (U2) framework and a built-in runtime to address the streaming and non-streaming decoding modes in a single model.
no code implementations • 25 Mar 2022 • Tao Fu, Huifen Zhou, Xu Ma, Z. Jason Hou, Di wu
In this study, we develop a supervised machine learning approach to generate 1) the probability of the next operation day containing the peak hour of the month and 2) the probability of an hour to be the peak hour of the day.
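A hedged sketch of such a first-stage classifier on synthetic placeholder features; the feature set, labels, and model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder daily features, e.g., forecast peak temperature, forecast peak load,
# weekday indicator, month index (all synthetic here).
X = rng.normal(size=(365, 4))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # placeholder "day contains monthly peak" label

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[-1:])[:, 1])  # P(next operation day contains the monthly peak hour)
```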
2 code implementations • CVPR 2022 • Xuehui Yu, Pengfei Chen, Di wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, Qixiang Ye, Zhenjun Han
In this study, we propose a POL method using coarse point annotations, relaxing the supervision signals from accurate key points to freely spotted points.
1 code implementation • 15 Mar 2022 • Di wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang
State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data.
no code implementations • 11 Mar 2022 • Di wu, Cheng Chen, Xiujun Chen, Junwei Pan, Xun Yang, Qing Tan, Jian Xu, Kuang-Chih Lee
In order to address the unstable traffic pattern challenge and achieve the optimal overall outcome, we propose a multi-agent reinforcement learning method to adjust the bids from each guaranteed contract, which is simple, converging efficiently and scalable.
no code implementations • 25 Feb 2022 • Di wu, Jie Yang, Mohamad Sawan
The proposed training scheme significantly improves the performance of patient-specific seizure predictors and bridges the gap between patient-specific and patient-independent predictors.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Ce Yang, Weihao Gao, Di wu, Chong Wang
Simulation of the dynamics of physical systems is essential to the development of both science and engineering.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.