1 code implementation • 16 Feb 2024 • Yuanzhen Xie, Xinzhou Jin, Tao Xie, Mingxiong Lin, Liang Chen, Chenyun Yu, Lei Cheng, Chengxiang Zhuo, Bo Hu, Zang Li
To improve the contextual learning capabilities of LLMs in text-to-SQL, a workflow paradigm method is proposed, aiming to enhance the attention and problem-solving scope of LLMs through decomposition.
no code implementations • 30 Jan 2024 • Hao Zhang, Qingfeng Lin, Yang Li, Lei Cheng, Yik-Chung Wu
This problem is even more severe in cell-free networks, where a large number of such parameters must be acquired.
1 code implementation • 9 Aug 2023 • Siyuan Li, Lei Cheng, Ting Zhang, Hangfang Zhao, Jianlong Li
Accurately reconstructing a three-dimensional ocean sound speed field (3D SSF) is essential for various ocean acoustic applications, but the sparsity and uncertainty of sound speed samples across a vast ocean region make it a challenging task.
no code implementations • 5 Jul 2023 • Xingyu Ji, Lei Cheng, Hangfang Zhao
The performance of our proposed method is compared with benchmark methods, including compressive sensing (CS), BCS, and Laplacian-prior BCS (L-BCS).
no code implementations • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu, H. Vincent Poor
A probabilistic model is built to induce common sparsity in the spatial domain, and a first-order Taylor expansion is adopted to eliminate the grid mismatch in the dictionaries.
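The first-order Taylor trick mentioned above can be illustrated generically: an off-grid atom is approximated by the nearest grid atom plus its derivative scaled by the grid offset. The array model, angles, and function names below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def steering(theta, m):
    # Far-field steering vector of an m-element uniform linear array (illustrative model).
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def steering_derivative(theta, m, eps=1e-6):
    # Numerical derivative of the steering vector w.r.t. the angle.
    return (steering(theta + eps, m) - steering(theta - eps, m)) / (2 * eps)

m = 16
theta_grid = 0.30          # nearest dictionary grid point (radians)
offset = 0.01              # true angle lies slightly off the grid
theta_true = theta_grid + offset

# First-order Taylor expansion around the grid point:
#   a(theta_true) ~= a(theta_grid) + offset * a'(theta_grid)
approx = steering(theta_grid, m) + offset * steering_derivative(theta_grid, m)
exact = steering(theta_true, m)

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
# Compare with using the on-grid atom alone (zeroth-order approximation).
zeroth_err = np.linalg.norm(steering(theta_grid, m) - exact) / np.linalg.norm(exact)
print(f"first-order error: {rel_err:.2e}, zeroth-order error: {zeroth_err:.2e}")
```

The first-order term substantially tightens the approximation, which is why such expansions are a common device for handling grid mismatch in sparsity-based estimation.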
1 code implementation • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) representation has achieved tremendous success in visual data completion tasks, especially when it is combined with tensor folding.
no code implementations • 3 Apr 2023 • Wen Shen, Lei Cheng, Yuxiao Yang, Mingjie Li, Quanshi Zhang
In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts.
no code implementations • 31 Jan 2023 • Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo
GDOD explicitly decomposes gradients into task-shared and task-conflict components and adopts a general update rule to avoid interference across all task gradients.
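A generic sketch of splitting per-task gradients into a shared component and a residual (potentially conflicting) component, in the spirit of the snippet above. This is an illustrative decomposition along the average-gradient direction, not the paper's exact GDOD rule.

```python
import numpy as np

def split_shared_conflict(grads):
    """Split each task gradient into a component along the average (shared)
    direction and an orthogonal residual (potential conflict).
    Generic illustration only, not the GDOD algorithm itself."""
    g_mean = np.mean(grads, axis=0)
    u = g_mean / (np.linalg.norm(g_mean) + 1e-12)  # shared direction
    shared, conflict = [], []
    for g in grads:
        s = np.dot(g, u) * u        # projection onto the shared direction
        shared.append(s)
        conflict.append(g - s)      # orthogonal residual
    return np.array(shared), np.array(conflict)

# Two toy task gradients that partially disagree.
grads = np.array([[1.0, 2.0], [1.0, -1.0]])
shared, conflict = split_shared_conflict(grads)
print(shared + conflict)  # each gradient is exactly the sum of its two parts
```

By construction, the conflict components are orthogonal to the shared direction, so an update rule can treat the two parts differently to limit cross-task interference.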
1 code implementation • 15 Dec 2022 • Zhidi Lin, Lei Cheng, Feng Yin, Lexi Xu, Shuguang Cui
Gaussian process state-space model (GPSSM) is a fully probabilistic state-space model that has attracted much attention over the past decade.
1 code implementation • 20 Oct 2022 • Zhuo Chen, Wen Zhang, Yufeng Huang, Mingyang Chen, Yuxia Geng, Hongtao Yu, Zhen Bi, Yichi Zhang, Zhen Yao, Wenting Song, Xinliang Wu, Yi Yang, Mingyi Chen, Zhaoyang Lian, YingYing Li, Lei Cheng, Huajun Chen
In this work, we share our experience on tele-knowledge pre-training for fault analysis, a crucial task in telecommunication applications that requires a wide range of knowledge normally found in both machine log data and product documents.
1 code implementation • 12 Jun 2022 • Ruslan Khalitov, Tong Yu, Lei Cheng, Zhirong Yang
Sequential data naturally have different lengths in many domains, with some very long sequences.
Ranked #4 on Long-range modeling on LRA
no code implementations • 11 Jun 2022 • Xiaodan Shao, Lei Cheng, Xiaoming Chen, Chongwen Huang, Derrick Wing Kwan Ng
Then, by associating the data sequences with multiple rank-one tensors and exploiting the angular sparsity of the RIS-BS channel, the detection problem is cast as a high-order coupled tensor decomposition problem without the need for pilot sequences.
no code implementations • 28 May 2022 • Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang
However, a comeback of Bayesian methods is taking place, shedding new light on the design of deep neural networks, establishing firm links with Bayesian models, and inspiring new paths for unsupervised learning, such as Bayesian tensor decomposition.
1 code implementation • CVPR 2022 • Tong Yu, Ruslan Khalitov, Lei Cheng, Zhirong Yang
The overall computing cost of the new building block is as low as $O(N \log N)$.
Ranked #18 on Long-range modeling on LRA
no code implementations • 2 Apr 2022 • Kai Li, Ying Li, Lei Cheng, Qingjiang Shi, Zhi-Quan Luo
Acquisition of the downlink channel covariance matrix (CCM) is a key step for the practical performance of massive multiple-input multiple-output (MIMO) systems in tasks including beamforming, channel tracking, and user scheduling.
no code implementations • 18 Mar 2022 • Yangge Chen, Lei Cheng, Yik-Chung Wu
Recently, there has been a revival of interest in low-rank matrix completion-based unsupervised learning through the lens of dual-graph regularization, which has significantly improved performance in multidisciplinary machine learning tasks such as recommender systems, genotype imputation, and image inpainting.
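As a generic illustration of dual-graph regularization in matrix completion (a standard formulation, not necessarily the paper's specific model), one minimizes a data-fit term on observed entries plus row-graph and column-graph Laplacian smoothness penalties. All values below are a toy example.

```python
import numpy as np

def dual_graph_completion(M, mask, L_row, L_col, lam=0.01, lr=0.1, iters=2000):
    """Complete matrix M (observed where mask == 1) by gradient descent on
    0.5 * ||mask * (X - M)||_F^2 + lam * (tr(X^T L_row X) + tr(X L_col X^T)).
    Generic dual-graph-regularized objective, for illustration only."""
    X = mask * M  # initialize with the observed entries
    for _ in range(iters):
        grad = mask * (X - M) + 2 * lam * (L_row @ X + X @ L_col)
        X -= lr * grad
    return X

def path_laplacian(n):
    # Laplacian of a path graph: encodes "neighboring rows/columns are similar".
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

# Toy matrix with smooth rows and columns; hide one entry and recover it.
M = np.outer([1.0, 1.1, 1.2], [2.0, 2.1, 2.2])
mask = np.ones_like(M)
mask[1, 1] = 0.0

X = dual_graph_completion(M, mask, path_laplacian(3), path_laplacian(3))
print(X[1, 1], M[1, 1])  # recovered entry vs. ground truth
```

The graph penalties pull the hidden entry toward a weighted average of its graph neighbors, which is exactly how side-information graphs compensate for missing observations.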
no code implementations • 23 Feb 2022 • Chunhui Zhang, Xiaoming Yuan, Qianyun Zhang, Guangxu Zhu, Lei Cheng, Ning Zhang
To further adapt both to various data distributions and to different types of devices with heterogeneous embedded hardware platforms, a Cluster Federated Direct Neural Architecture Search (CFDNAS) framework, inspired by meta-learning, is proposed to achieve device-aware NAS, in the sense that each device can learn a deep learning model tailored to its particular data distribution and hardware constraint.
no code implementations • 21 Jan 2022 • Lei Cheng, Xingyu Ji, Hangfang Zhao, Jianlong Li, Wen Xu
In particular, a tensor-based basis function learning framework is proposed, which can include the classical basis functions (using EOFs and/or Fourier basis functions) as its special cases.
1 code implementation • 6 Jan 2022 • Lei Cheng, Ruslan Khalitov, Tong Yu, Zhirong Yang
Recurrent Neural Networks, Transformers, and Convolutional Neural Networks are three major techniques for learning from sequential data.
1 code implementation • 19 Nov 2021 • Chenglin Li, Mingjun Zhao, Huanming Zhang, Chenyun Yu, Lei Cheng, Guoqiang Shu, Beibei Kong, Di Niu
The learned GUR captures the overall preferences and characteristics of a user and thus can be used to augment the behavior data and improve recommendations in any single domain in which the user is involved.
1 code implementation • 16 Sep 2021 • Ruslan Khalitov, Tong Yu, Lei Cheng, Zhirong Yang
The sparse factorization method is tested for a variety of synthetic and real-world square matrices.
Ranked #17 on Long-range modeling on LRA
no code implementations • ACL 2021 • Bo Zhang, XiaoMing Zhang, Yun Liu, Lei Cheng, Zhoujun Li
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from the source domain to the unlabeled target domain.
no code implementations • 31 May 2021 • Chenglin Li, Carrie Lu Tong, Di Niu, Bei Jiang, Xiao Zuo, Lei Cheng, Jian Xiong, Jianming Yang
Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently.
no code implementations • 24 Jan 2021 • Lei Cheng, Qingjiang Shi
Channel estimation has long been deemed one of the most critical problems in three-dimensional (3D) massive multiple-input multiple-output (MIMO), which is recognized as the leading technology enabling 3D spatial signal processing in fifth-generation (5G) wireless communications and beyond.
no code implementations • 6 Nov 2020 • Chunhui Zhang, Yongyuan Liang, Xiaoming Yuan, Lei Cheng
To further adapt to the various data distributions of clients, a cluster Federated Direct Neural Architecture Search (CFDNAS) framework, inspired by meta-learning, is proposed to achieve client-aware NAS, in the sense that each client can learn a deep learning model tailored to its particular data distribution.
no code implementations • 22 Oct 2020 • Kai Li, Ying Li, Lei Cheng, Qingjiang Shi, Zhi-Quan Luo
There is a fundamental trade-off between the channel-representation resolution of codebooks and the feedback communication overhead in fifth-generation new radio (5G NR) frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) systems.
no code implementations • 13 Oct 2020 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) decomposition, a powerful tool for analyzing multidimensional data, exhibits superior performance in many machine learning tasks.
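Tensor train representation factorizes a d-way tensor into a chain of 3-way cores. The sketch below implements the classical TT-SVD scheme via sequential truncated SVDs as a minimal illustration; it is not the method proposed in the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a tensor into TT cores via sequential truncated SVDs
    (the classical TT-SVD scheme; illustrative sketch)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (np.diag(S[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the chain of TT cores back into a full tensor.
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Toy tensor with TT ranks equal to 1 (an outer product of three vectors).
a, b, c = np.arange(2.0), np.arange(3.0), np.arange(4.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_reconstruct(cores) - T)
print(f"reconstruction error: {err:.2e}")
```

Because the toy tensor's true TT ranks do not exceed the truncation rank, the reconstruction is exact up to floating-point error; for higher-rank data, the truncation trades accuracy for storage.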
no code implementations • 7 Sep 2020 • Dan Liu, Shuai Wang, Zhigang Wen, Lei Cheng, Miaowen Wen, Yik-Chung Wu
However, different devices may transmit different data for different machine learning jobs, and a fundamental question is how to jointly plan the UGV path, the devices' energy consumption, and the number of samples for each job.
no code implementations • 5 Sep 2020 • Lei Cheng, Zhongtao Chen, Qingjiang Shi, Yik-Chung Wu, Sergios Theodoridis
However, the optimal determination of a tensor rank is known to be a non-deterministic polynomial-time hard (NP-hard) task.
no code implementations • 20 Jan 2019 • Denny Wu, Hirofumi Kobayashi, Charles Ding, Lei Cheng, Keisuke Goda, Marzyeh Ghassemi
A crucial challenge in image-based modeling of biomedical data is to identify trends and features that separate normality and pathology.