no code implementations • EMNLP 2021 • Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, Jun Zhang
Out-of-Distribution (OOD) detection is an important problem in natural language processing (NLP).
1 code implementation • ACL 2022 • Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun
In recent years, large-scale pre-trained language models (PLMs) containing billions of parameters have achieved promising results on various NLP tasks.
1 code implementation • Findings (EMNLP) 2021 • Jun Zhang, Yan Yang, Chencai Chen, Liang He, Zhou Yu
Recommendation dialogs require the system to build a social bond with users to gain trust and develop affinity in order to increase the chance of a successful recommendation.
1 code implementation • 12 Sep 2024 • Jingwen Tong, Jiawei Shao, Qiong Wu, Wei Guo, Zijian Li, Zehong Lin, Jun Zhang
Wireless networks are increasingly facing challenges due to their expanding scale and complexity.
no code implementations • 5 Sep 2024 • Jiayue Liu, Tianqi Mao, Dongxuan He, Yang Yang, Zhen Gao, Dezhi Zheng, Jun Zhang
The escalating interest in underwater exploration/reconnaissance applications has motivated high-rate data transmission from underwater to airborne relaying platforms, especially in high-sea scenarios.
no code implementations • 29 Aug 2024 • Fangfu Liu, Wenqiang Sun, HanYang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, Yueqi Duan
Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos.
no code implementations • 28 Aug 2024 • Yuchang Sun, Yuexiang Xie, Bolin Ding, Yaliang Li, Jun Zhang
Federated learning (FL) has emerged as a promising paradigm for fine-tuning foundation models using distributed data in a privacy-preserving manner.
1 code implementation • 22 Aug 2024 • Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, Yu Cheng
Only a few studies have explored the conflicts between the inherent knowledge of LLMs and the retrieved contextual knowledge.
no code implementations • 5 Aug 2024 • Guo-Yun Lin, Zong-Gan Chen, Yuncheng Jiang, Zhi-Hui Zhan, Jun Zhang
First, a landscape-aware peak exploration strategy helps each individual evolve adaptively to locate a peak, and simulates the regions of the found peaks according to the search history to prevent individuals from converging to peaks that have already been found.
no code implementations • 30 Jul 2024 • Jiawei Shao, Teng Li, Jun Zhang
Despite its potential to improve perception accuracy and robustness, the large amount of raw sensor data inevitably results in high communication overhead.
1 code implementation • 27 Jul 2024 • Shigang Liu, Di Cao, Junae Kim, Tamas Abraham, Paul Montague, Seyit Camtepe, Jun Zhang, Yang Xiang
Recently, deep learning has demonstrated promising results in enhancing the accuracy of vulnerability detection and identifying vulnerabilities in software.
no code implementations • 15 Jul 2024 • Zhening Liu, Xinjie Zhang, Jiawei Shao, Zehong Lin, Jun Zhang
With the rapid advancement of stereo vision technologies, stereo image compression has emerged as a crucial field that continues to draw significant attention.
1 code implementation • 11 Jul 2024 • Wenwen Min, Zhiceng Shi, Jun Zhang, Jun Wan, Changmiao Wang
In this paper, we propose mclSTExp, a multimodal contrastive learning framework with a Transformer and DenseNet-121 encoder for Spatial Transcriptomics Expression prediction.
no code implementations • 5 Jul 2024 • Ye Bai, Jingping Chen, Jitong Chen, Wei Chen, Zhuo Chen, Chuang Ding, Linhao Dong, Qianqian Dong, Yujiao Du, Kepan Gao, Lu Gao, Yi Guo, Minglun Han, Ting Han, Wenchao Hu, Xinying Hu, Yuxiang Hu, Deyu Hua, Lu Huang, Mingkun Huang, Youjia Huang, Jishuo Jin, Fanliu Kong, Zongwei Lan, Tianyu Li, Xiaoyang Li, Zeyang Li, Zehua Lin, Rui Liu, Shouda Liu, Lu Lu, Yizhou Lu, Jingting Ma, Shengtao Ma, Yulin Pei, Chen Shen, Tian Tan, Xiaogang Tian, Ming Tu, Bo Wang, Hao Wang, Yuping Wang, Yuxuan Wang, Hanzhang Xia, Rui Xia, Shuangyi Xie, Hongmin Xu, Meng Yang, Bihong Zhang, Jun Zhang, Wanyi Zhang, Yang Zhang, Yawei Zhang, Yijie Zheng, Ming Zou
Modern automatic speech recognition (ASR) models are required to accurately transcribe diverse speech signals (from different domains, languages, accents, etc.) given the specific contextual information in various application scenarios.
Automatic Speech Recognition (ASR) +3
no code implementations • 3 Jul 2024 • Zhenyu He, Jun Zhang, Shengjie Luo, Jingjing Xu, Zhi Zhang, Di He
Simply encoding the edited subsequence and integrating it into the original KV cache leads to a temporal confusion problem, resulting in significantly worse performance.
1 code implementation • 2 Jul 2024 • Fei Shen, Hu Ye, Sibo Liu, Jun Zhang, Cong Wang, Xiao Han, Wei Yang
Moreover, unlike autoregressive models, RCDMs can generate consistent stories with a single forward inference.
no code implementations • 26 Jun 2024 • Zifan Liu, Xinran Li, Shibo Chen, Gen Li, Jiashuo Jiang, Jun Zhang
However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience.
no code implementations • 25 Jun 2024 • Van Tung Pham, Yist Lin, Tao Han, Wei Li, Jun Zhang, Lu Lu, Yuxuan Wang
Finally, we explore training and inference methods to mitigate high insertion errors.
no code implementations • 24 Jun 2024 • Yifan Ma, Hengtao He, Shenghui Song, Jun Zhang, Khaled B. Letaief
In frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) systems, the growing number of base station antennas leads to prohibitive feedback overhead for downlink channel state information (CSI).
1 code implementation • 20 Jun 2024 • Jie Feng, Jun Zhang, Junbo Yan, Xin Zhang, Tianjian Ouyang, Tianhui Liu, Yuwei Du, Siqi Guo, Yong Li
Based on CitySim, we design 7 tasks in 2 categories, perception-understanding and decision-making, to evaluate the capability of LLMs as a city-scale world model for the urban domain.
1 code implementation • 20 Jun 2024 • Zhaochen Su, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li, Min Zhang, Yu Cheng
Therefore, we propose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks?
1 code implementation • 19 Jun 2024 • Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, Zhizheng Wu
We also conduct a comprehensive evaluation using objective evaluation methods (e.g., BLEU and ROUGE), subjective evaluations, and LLM-based metrics for the generated responses.
1 code implementation • 15 Jun 2024 • Jun Zhang, Wenxuan Ao, Junbo Yan, Depeng Jin, Yong Li
Based on the simulator, we implement a set of microscopic and macroscopic controllable objects and metrics to support most typical transportation system optimization scenarios.
1 code implementation • 13 Jun 2024 • Zhaochen Su, Juntao Li, Jun Zhang, Tong Zhu, Xiaoye Qu, Pan Zhou, Yan Bowen, Yu Cheng, Min Zhang
Temporal reasoning is fundamental for large language models (LLMs) to comprehend the world.
no code implementations • 12 Jun 2024 • Jingwen Tong, Xinran Li, Liqun Fu, Jun Zhang, Khaled B. Letaief
In this paper, we study the cooperative resource allocation problem with unknown system dynamics of MRPs.
no code implementations • 12 Jun 2024 • Yerbolat Khassanov, Zhipeng Chen, Tianfeng Chen, Tze Yuang Chong, Wei Li, Jun Zhang, Lu Lu, Yuxuan Wang
This paper addresses challenges in integrating new languages into a pre-trained multilingual automatic speech recognition (mASR) system, particularly in scenarios where training data for existing languages is limited or unavailable.
no code implementations • 7 Jun 2024 • Jianbo Dong, Bin Luo, Jun Zhang, Pengcheng Zhang, Fei Feng, Yikai Zhu, Ang Liu, Zian Chen, Yi Shi, Hairong Jiao, Gang Lu, Yu Guan, Ennan Zhai, Wencong Xiao, Hanyu Zhao, Man Yuan, Siran Yang, Xiang Li, Jiamang Wang, Rui Men, Jianwei Zhang, Huang Zhong, Dennis Cai, Yuan Xie, Binzhang Fu
By leveraging this feature, C4 can rapidly identify the faulty components, swiftly isolate the anomaly, and restart the task, thereby avoiding resource wastage caused by delays in anomaly detection.
no code implementations • 6 Jun 2024 • Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Ming Ding, Chao Chen, Kok-Leong Ong, Jun Zhang, Yang Xiang
Deep Learning (DL) powered by Deep Neural Networks (DNNs) has revolutionized various domains, yet understanding the intricacies of DNN decision-making and learning processes remains a significant challenge.
no code implementations • 4 Jun 2024 • Cong Wang, Kuan Tian, Jun Zhang, Yonghang Guan, Feng Luo, Fei Shen, Zhiwei Jiang, Qing Gu, Xiao Han, Wei Yang
In our work on portrait video generation, we identified audio signals as particularly weak, often overshadowed by stronger signals such as facial pose and reference image.
no code implementations • 3 Jun 2024 • Yang Liu, Xiaofei Li, Jun Zhang, Shengze Hu, Jun Lei
The increasing difficulty in accurately detecting forged images generated by AIGC (Artificial Intelligence Generative Content) poses many risks, necessitating the development of effective methods to identify and further locate forged areas.
no code implementations • 3 Jun 2024 • Zijian Li, Qingyan Guo, Jiawei Shao, Lei Song, Jiang Bian, Jun Zhang, Rui Wang
A graph neural network (GNN) is then leveraged to exploit the relationships between passages and improve the retrieval of supporting passages.
no code implementations • 2 Jun 2024 • Wenqiang Sun, Zhengyi Wang, Shuo Chen, Yikai Wang, Zilong Chen, Jun Zhu, Jun Zhang
We first analyze the role of triplanes in feed-forward methods and find that the inconsistent multi-view images introduce high-frequency artifacts on triplanes, leading to low-quality 3D meshes.
1 code implementation • 28 May 2024 • Xinran Li, Zifan Liu, Shibo Chen, Jun Zhang
In multi-agent reinforcement learning (MARL), effective exploration is critical, especially in sparse reward environments.
1 code implementation • 27 May 2024 • Cong Wang, Kuan Tian, Yonghang Guan, Jun Zhang, Zhiwei Jiang, Fei Shen, Xiao Han, Qing Gu, Wei Yang
In this paper, we propose a novel ensembling method, Adaptive Feature Aggregation (AFA), which dynamically adjusts the contributions of multiple models at the feature level according to various states (i.e., prompts, initial noises, denoising steps, and spatial locations), thereby keeping the advantages of multiple diffusion models, while suppressing their disadvantages.
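A minimal, hypothetical sketch of this kind of feature-level ensembling (assuming PyTorch; module names, shapes, and the state encoding are illustrative and not the authors' AFA implementation): a small gating network maps a state embedding to per-model weights, which then mix the models' feature maps.

```python
import torch
import torch.nn as nn

class FeatureAggregator(nn.Module):
    """Mix feature maps from several models with weights predicted from a state embedding."""
    def __init__(self, num_models: int, state_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, num_models))

    def forward(self, features: list, state: torch.Tensor) -> torch.Tensor:
        # features: list of [B, C, H, W] tensors, one per model; state: [B, state_dim]
        weights = torch.softmax(self.gate(state), dim=-1)      # [B, num_models]
        stacked = torch.stack(features, dim=1)                 # [B, num_models, C, H, W]
        weights = weights.view(*weights.shape, 1, 1, 1)        # broadcast over C, H, W
        return (weights * stacked).sum(dim=1)                  # weighted sum -> [B, C, H, W]
```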
no code implementations • 27 May 2024 • Xiaolu Wang, Yuchang Sun, Hoi-To Wai, Jun Zhang
We consider the distributed learning problem with data dispersed across multiple workers under the orchestration of a central server.
no code implementations • 27 May 2024 • Jiawei Shao, Jingwen Tong, Qiong Wu, Wei Guo, Zijian Li, Zehong Lin, Jun Zhang
To empower LLMs with knowledge and expertise in the wireless domain, this paper proposes WirelessLLM, a comprehensive framework for adapting and enhancing LLMs to address the unique challenges and requirements of wireless communication networks.
2 code implementations • 27 May 2024 • Shenyuan Gao, Jiazhi Yang, Li Chen, Kashyap Chitta, Yihang Qiu, Andreas Geiger, Jun Zhang, Hongyang Li
In this paper, we present Vista, a generalizable driving world model with high fidelity and versatile controllability.
no code implementations • 16 May 2024 • Tianqu Kang, Lumin Liu, Hengtao He, Jun Zhang, S. H. Song, Khaled B. Letaief
To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights.
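As a rough illustration of that mechanism (a generic DP-style sketch, not the specific algorithm studied in this paper), a client could clip its model update and add Gaussian noise before sharing it; `clip_norm` and `noise_multiplier` are illustrative hyperparameters.

```python
import torch

def privatize_update(update: dict, clip_norm: float = 1.0, noise_multiplier: float = 1.0) -> dict:
    """Clip a model update to a fixed L2 norm and add Gaussian noise before sharing it."""
    # Global L2 norm of the whole update (all tensors concatenated).
    flat = torch.cat([p.reshape(-1) for p in update.values()])
    scale = min(1.0, clip_norm / (flat.norm(p=2).item() + 1e-12))

    noisy = {}
    for name, p in update.items():
        clipped = p * scale
        # Gaussian noise whose standard deviation is calibrated to the clipping norm.
        noise = torch.randn_like(clipped) * noise_multiplier * clip_norm
        noisy[name] = clipped + noise
    return noisy

# Usage sketch: update = {name: local_param - global_param, ...}; send privatize_update(update).
```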
1 code implementation • 15 May 2024 • Hongru Li, Jiawei Shao, Hengtao He, Shenghui Song, Jun Zhang, Khaled B. Letaief
Specifically, we propose an invariant feature encoding approach based on the IB principle and IRM framework for domain-shift generalization, which aims to find the causal relationship between the input data and the task result by minimizing the complexity and domain dependence of the encoded feature.
1 code implementation • 10 May 2024 • Li Ling, Jun Zhang, Nils Bore, John Folkesson, Anna Wåhlin
However, in the underwater domain, most registration of multibeam echo-sounder (MBES) point cloud data is still performed using classical methods in the iterative closest point (ICP) family.
no code implementations • 6 May 2024 • Leonard Bruns, Jun Zhang, Patric Jensfelt
Existing neural field-based SLAM methods typically employ a single monolithic field as their scene representation.
no code implementations • CVPR 2024 • Xingtong Ge, Jixiang Luo, Xinjie Zhang, Tongda Xu, Guo Lu, Dailan He, Jing Geng, Yan Wang, Jun Zhang, Hongwei Qin
Prior research on deep video compression (DVC) for machine tasks typically necessitates training a unique codec for each specific task, mandating a dedicated decoder per task.
no code implementations • 6 Apr 2024 • Liqun Fu, Jingwen Tong, Tongtong Lin, Jun Zhang
Because the learned objective model is typically non-convex and challenging to solve in real time, we leverage Lyapunov optimization to decouple the long-term average constraint and apply the primal-dual method to solve the decoupled resource allocation problem.
no code implementations • 30 Mar 2024 • Jingwen Tong, Zhenzhen Chen, Liqun Fu, Jun Zhang, Zhu Han
To address the challenges posed by system and data heterogeneities in the FL process, we study a goal-directed client selection problem based on the model analytics framework by selecting a subset of clients for the model training.
no code implementations • 28 Mar 2024 • Xinyu Bian, Yuhao Liu, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang
Simulation results demonstrate the effectiveness of our proposed decentralized precoding scheme, which achieves performance similar to the optimal centralized precoding scheme.
no code implementations • 15 Mar 2024 • Yuhao Liu, Xinyu Bian, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang
In order to control the inter-cell interference for a multi-cell multi-user multiple-input multiple-output network, we consider the precoder design for coordinated multi-point with downlink coherent joint transmission.
no code implementations • 15 Mar 2024 • Xiaohang Yu, Zhengxian Yang, Shi Pan, Yuqi Han, Haoxiang Wang, Jun Zhang, Shi Yan, Borong Lin, Lei Yang, Tao Yu, Lu Fang
We have built a custom mobile multi-camera large-space dense light field capture system, which provides a series of high-quality and sufficiently dense light field images for various scenarios.
2 code implementations • CVPR 2024 • Jiazhi Yang, Shenyuan Gao, Yihang Qiu, Li Chen, Tianyu Li, Bo Dai, Kashyap Chitta, Penghao Wu, Jia Zeng, Ping Luo, Jun Zhang, Andreas Geiger, Yu Qiao, Hongyang Li
In this paper, we introduce the first large-scale video prediction model in the autonomous driving discipline.
no code implementations • 13 Mar 2024 • Xinjie Zhang, Shenyuan Gao, Zhening Liu, Jiawei Shao, Xingtong Ge, Dailan He, Tongda Xu, Yan Wang, Jun Zhang
Existing learning-based stereo image codecs adopt sophisticated transformations with simple entropy models derived from single-image codecs to encode latent representations.
1 code implementation • 13 Mar 2024 • Xinjie Zhang, Xingtong Ge, Tongda Xu, Dailan He, Yan Wang, Hongwei Qin, Guo Lu, Jing Geng, Jun Zhang
In response, we propose a groundbreaking paradigm of image representation and compression by 2D Gaussian Splatting, named GaussianImage.
no code implementations • 4 Mar 2024 • Hongshu Guo, Yining Ma, Zeyuan Ma, Jiacheng Chen, Xinglin Zhang, Zhiguang Cao, Jun Zhang, Yue-Jiao Gong
As a proof-of-principle study, we apply this framework to a group of Differential Evolution algorithms.
1 code implementation • CVPR 2024 • Xinjie Zhang, Ren Yang, Dailan He, Xingtong Ge, Tongda Xu, Yan Wang, Hongwei Qin, Jun Zhang
Implicit neural representations (INRs) have emerged as a promising approach for video storage and processing, showing remarkable versatility across various video tasks.
no code implementations • 28 Feb 2024 • Xinyu Bian, Yuyi Mao, Jun Zhang
Grant-free random access (RA) has been recognized as a promising solution to support massive connectivity due to the removal of the uplink grant request procedures.
1 code implementation • 27 Feb 2024 • Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong
The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length.
no code implementations • 17 Feb 2024 • Xiaolu Wang, Zijian Li, Shi Jin, Jun Zhang
Federated learning (FL) is an emerging distributed training paradigm that aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
no code implementations • 15 Feb 2024 • Tailin Zhou, Jiadong Yu, Jun Zhang, Danny H. K. Tsang
This paper investigates resource allocation to provide heterogeneous users with customized virtual reality (VR) services in a mobile edge computing (MEC) system.
no code implementations • 9 Feb 2024 • Zhuoran Zheng, Jun Zhang
In endoscopic imaging, the recorded images are prone to exposure abnormalities, so maintaining high-quality images is important to assist healthcare professionals in decision-making.
no code implementations • 8 Feb 2024 • Yasas Supeksala, Dinh C. Nguyen, Ming Ding, Thilina Ranbaduge, Calson Chua, Jun Zhang, Jun Li, H. Vincent Poor
In this light, it is crucial to utilize information in learning processes that are either distributed or owned by different entities.
no code implementations • 29 Jan 2024 • Wenqiang Sun, Teng Li, Zehong Lin, Jun Zhang
Recently, text-to-image diffusion models have demonstrated impressive ability to generate high-quality images conditioned on the textual input.
1 code implementation • 25 Jan 2024 • Jian Kuang, Wenjing Li, Fang Li, Jun Zhang, Zhongcheng Wu
Distracted driver activity recognition plays a critical role in risk aversion, which is particularly beneficial in intelligent transportation systems.
no code implementations • 24 Jan 2024 • Yuchang Sun, Marios Kountouris, Jun Zhang
We show that the generalization performance of a client can be improved only by collaborating with other clients that have more training data and similar data distribution.
no code implementations • CVPR 2024 • Guohao Peng, Heshan Li, Yangyang Zhao, Jun Zhang, Zhenyu Wu, Pengyu Zheng, Danwei Wang
To validate TransLoc4D, we construct two datasets and set up benchmarks for 4D radar place recognition.
no code implementations • 31 Dec 2023 • Sihao Yuan, Xu Han, Jun Zhang, Zhaoxin Xie, Cheng Fan, Yunlong Xiao, Yi Qin Gao, Yi Isaac Yang
We applied this approach to study a Claisen rearrangement reaction and a carbonyl insertion reaction catalyzed by manganese.
1 code implementation • 25 Dec 2023 • Xinran Li, Jun Zhang
Following this, agents utilize attention mechanisms in the second stage to selectively generate messages personalized for the receivers.
no code implementations • 21 Dec 2023 • Ruoxiao Cao, Hengtao He, Xianghao Yu, Shenghui Song, Kaibin Huang, Jun Zhang, Yi Gong, Khaled B. Letaief
To address the joint channel estimation and cooperative localization problem for near-field UM-MIMO systems, we propose a variational Newtonized near-field channel estimation (VNNCE) algorithm and a Gaussian fusion cooperative localization (GFCL) algorithm.
1 code implementation • 19 Dec 2023 • Fengli Xu, Jun Zhang, Chen Gao, Jie Feng, Yong Li
Urban environments, characterized by their complex, multi-layered networks encompassing physical, social, economic, and environmental dimensions, face significant challenges in the face of rapid urbanization.
no code implementations • 18 Dec 2023 • Jun Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong
Given that orthogonal memory compresses global information, we further dissect the context to amplify fine-grained local information.
no code implementations • 16 Dec 2023 • Wentao Yu, Hengtao He, Xianghao Yu, Shenghui Song, Jun Zhang, Ross Murch, Khaled B. Letaief
In this paper, we address the fundamental challenge of designing a low-complexity Bayes-optimal channel estimator in near-field HMIMO systems operating in unknown EM environments.
no code implementations • 1 Dec 2023 • Yuyi Mao, Xianghao Yu, Kaibin Huang, Ying-Jun Angela Zhang, Jun Zhang
Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, including training data acquisition, edge training, and edge inference.
1 code implementation • 28 Nov 2023 • Biao Xu, Haijun Fu, Shasha Huang, Shihua Ma, Yaoxu Xiong, Jun Zhang, Xuepeng Xiang, Wenyu Lu, Ji-Jung Kai, Shijun Zhao
Interstitial diffusion is a pivotal process that governs the phase stability and irradiation response of materials in non-equilibrium conditions.
no code implementations • 21 Nov 2023 • Zhen Chen, Yuhao Zhai, Jun Zhang, Jinqiao Wang
Specifically, we propose an efficient multi-scale surgical temporal action (MS-STA) module, which integrates visual features with spatial and temporal knowledge of surgical actions at the cost of 2D networks.
1 code implementation • 20 Nov 2023 • Lei Geng, Xu Yan, Ziqiang Cao, Juntao Li, Wenjie Li, Sujian Li, Xinjie Zhou, Yang Yang, Jun Zhang
We achieve a biomedical multilingual corpus by incorporating three granularity knowledge alignments (entity, fact, and passage levels) into monolingual corpora.
no code implementations • 15 Nov 2023 • Jin Qiu, Lu Huang, Boyu Li, Jun Zhang, Lu Lu, Zejun Ma
Deep biasing for the Transducer can improve the recognition performance of rare words or contextual entities, which is essential in practical applications, especially for streaming Automatic Speech Recognition (ASR).
Automatic Speech Recognition (ASR) +1
no code implementations • 14 Nov 2023 • Wentao Yu, Hengtao He, Xianghao Yu, Shenghui Song, Jun Zhang, Ross D. Murch, Khaled B. Letaief
Holographic MIMO (HMIMO) has recently been recognized as a promising enabler for future 6G systems through the use of an ultra-massive number of antennas in a compact space to exploit the propagation characteristics of the electromagnetic (EM) channel.
no code implementations • 7 Nov 2023 • Yao Zhang, Zhiwen Yu, Jun Zhang, Liang Wang, Tom H. Luan, Bin Guo, Chau Yuen
Nevertheless, existing MARL algorithms ignore effective information aggregation which is fundamental for improving the learning capacity of decentralized agents.
1 code implementation • 18 Oct 2023 • Feng Luo, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
To alleviate the huge computational cost required by pixel-based diffusion SR, latent-based methods utilize a feature encoder to transform the image and then implement the SR image generation in a compact latent space.
no code implementations • 16 Oct 2023 • Jun Zhang, Lipeng Zhu, Chao Wang, Shutao Li
On the other hand, tensor nuclear norm (TNN)-based approaches have recently been shown to be more effective at preserving high-dimensional low-rank structures in tensor recovery.
no code implementations • 16 Oct 2023 • Kuan Tian, Yonghang Guan, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
Due to the absence of autoregressive modeling and optical flow alignment, we can design an extremely minimalist framework that can greatly benefit computational efficiency.
1 code implementation • 14 Oct 2023 • Shuyang Jiang, Jun Zhang, Jiangtao Feng, Lin Zheng, Lingpeng Kong
Furthermore, we marry AMLP with popular NAR models, deriving a highly efficient NAR-AMLP architecture with linear time and space complexity.
1 code implementation • 10 Oct 2023 • Fei Shen, Hu Ye, Jun Zhang, Cong Wang, Xiao Han, Wei Yang
Specifically, in the first stage, we design a simple prior conditional diffusion model that predicts the global features of the target image by mining the global alignment relationship between pose coordinates and image appearance.
no code implementations • 29 Sep 2023 • Tailin Zhou, Jun Zhang, Danny H. K. Tsang
Empirically, reducing data heterogeneity makes the connectivity on different paths more similar, forming more low-error overlaps between client and global modes.
no code implementations • 23 Sep 2023 • Peiwen Jiang, Chao-Kai Wen, Xinping Yi, Xiao Li, Shi Jin, Jun Zhang
Foundation models (FMs), including large language models, have become increasingly popular due to their wide-ranging applicability and ability to understand human-like semantics.
no code implementations • 20 Sep 2023 • Kuan Tian, Yonghang Guan, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
First, to solve the problem of codec inconsistency caused by the uncertainty of floating-point calculations across platforms, we design a calibration transmitting system that guarantees consistent quantization of entropy parameters between the encoding and decoding stages.
no code implementations • 18 Sep 2023 • Wentao Yu, Yifan Ma, Hengtao He, Shenghui Song, Jun Zhang, Khaled B. Letaief
Massive multiple-input multiple-output (MIMO) has been a critical enabling technology in 5th generation (5G) wireless networks.
1 code implementation • 15 Sep 2023 • Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, Sharad Mehrotra
This approach is characterized by a two-stage process: drafting and verification.
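To make the two-stage idea concrete, here is a greatly simplified greedy draft-then-verify loop with two models; it is a hedged sketch, not the paper's exact decoding scheme, and it assumes Hugging Face-style causal LMs whose outputs expose a `.logits` field.

```python
import torch

@torch.no_grad()
def draft_and_verify(target_model, draft_model, input_ids, k=4, max_new_tokens=64):
    """Greedy drafting with a cheap model, then one verification pass with the target model."""
    ids = input_ids  # shape [1, T]; batch size 1 for simplicity
    while ids.shape[1] - input_ids.shape[1] < max_new_tokens:
        # Stage 1: drafting -- propose k tokens greedily with the cheap model.
        draft = ids
        for _ in range(k):
            logits = draft_model(draft).logits[:, -1, :]
            draft = torch.cat([draft, logits.argmax(-1, keepdim=True)], dim=1)
        # Stage 2: verification -- score all drafted tokens with the target model in one pass.
        preds = target_model(draft).logits.argmax(-1)   # target's greedy choice at each position
        accepted = 0
        for i in range(draft.shape[1] - ids.shape[1]):
            pos = ids.shape[1] + i
            if preds[0, pos - 1] == draft[0, pos]:      # drafted token matches the target's choice
                accepted += 1
            else:
                break
        # Keep the accepted prefix, then append the target's own next token as a correction/bonus.
        cut = ids.shape[1] + accepted
        ids = torch.cat([draft[:, :cut], preds[:, cut - 1 : cut]], dim=1)
    return ids
```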
no code implementations • 14 Sep 2023 • Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jun Zhang, Yang Xiang
For the first time, we show the feasibility of a client-side adversary with limited knowledge being able to recover the training samples from the aggregated global model.
1 code implementation • 2 Sep 2023 • Jun Zhang, Huayang Zhuge, Yiyao Liu, Guohao Peng, Zhenyu Wu, Haoyuan Zhang, Qiyang Lyu, Heshan Li, Chunyang Zhao, Dogan Kircali, Sanat Mharolkar, Xun Yang, Su Yi, Yuanzhe Wang, Danwei Wang
5) Considered both middle- and large-scale outdoor environments, i.e., the 6 trajectories range from 246 m to 6.95 km.
no code implementations • 30 Aug 2023 • Zijian Li, Zehong Lin, Jiawei Shao, Yuyi Mao, Jun Zhang
However, devices often have non-independent and identically distributed (non-IID) data, meaning their local data distributions can vary significantly.
no code implementations • 27 Aug 2023 • Chen Shen, Jun Zhang, Xinggong Liang, Zeyi Hao, Kehan Li, Fan Wang, Zhenyuan Wang, Chunfeng Lian
Forensic pathology is critical for analyzing the manner and time of death at the microscopic level to help establish reliable factual bases for criminal investigation.
1 code implementation • 15 Aug 2023 • Yue Lv, Jinxi Xiang, Jun Zhang, Wenming Yang, Xiao Han, Wei Yang
We thus introduce a dynamic gating network on top of the low-rank adaptation method, in order to decide which decoder layer should employ adaptation.
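A minimal sketch of the idea of gating a low-rank adaptation branch per layer, assuming PyTorch; the module structure, rank, and gate design below are illustrative assumptions rather than the paper's implementation. A small gate decides how strongly each layer's LoRA delta is applied.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank update whose strength is set by a learned gate."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)            # keep the pretrained weight frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        self.gate = nn.Sequential(nn.Linear(base.in_features, 1), nn.Sigmoid())
        nn.init.zeros_(self.up.weight)                    # start with no adaptation applied

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, T, in_features]; one gate value per sample, in (0, 1).
        g = self.gate(x.mean(dim=1, keepdim=True))        # [B, 1, 1]
        return self.base(x) + g * self.up(self.down(x))
```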
3 code implementations • 13 Aug 2023 • Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, Wei Yang
Despite the simplicity of our method, an IP-Adapter with only 22M parameters can achieve comparable or even better performance to a fully fine-tuned image prompt model.
Ranked #2 on Personalized Image Generation on DreamBooth
no code implementations • 12 Aug 2023 • Yongcong Chen, Ting Zeng, Jun Zhang
At present, mainstream artificial intelligence generally adopts the technical path of "attention mechanism + deep learning" + "reinforcement learning".
no code implementations • 9 Aug 2023 • Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features.
no code implementations • 7 Aug 2023 • Lumin Liu, Jun Zhang, Shenghui Song, Khaled B. Letaief
To improve communication efficiency and achieve a better privacy-utility trade-off, we propose a communication-efficient FL training algorithm with differential privacy guarantee.
no code implementations • 20 Jul 2023 • Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
Without data centralization, FL allows clients to share local information in a privacy-preserving manner.
3 code implementations • 20 Jul 2023 • Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu
Recently, there has been growing interest in extending the context length of large language models (LLMs), aiming to effectively process long inputs of one turn or conversations with more extensive histories.
no code implementations • 6 Jul 2023 • Yifei Shen, Jiawei Shao, Xinjie Zhang, Zehong Lin, Hao Pan, Dongsheng Li, Jun Zhang, Khaled B. Letaief
The evolution of wireless networks gravitates towards connected intelligence, a concept that envisions seamless interconnectivity among humans, objects, and intelligence in a hyper-connected cyber-physical world.
1 code implementation • 21 Jun 2023 • Yuchang Sun, Yuyi Mao, Jun Zhang
Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server.
no code implementations • 7 Jun 2023 • Lu Huang, Boyu Li, Jun Zhang, Lu Lu, Zejun Ma
Domain adaptation using a text-only corpus is challenging in end-to-end (E2E) speech recognition.
no code implementations • 27 May 2023 • Linhao Dong, Zhecheng An, Peihao Wu, Jun Zhang, Lu Lu, Zejun Ma
We also observe that the cross-modal representation extracted by CIF-PT outperforms other neural interfaces on SLU tasks, including the dominant speech representation learned from self-supervised pre-training.
no code implementations • 26 May 2023 • Yuchang Sun, Zehong Lin, Yuyi Mao, Shi Jin, Jun Zhang
In this paper, we propose a probabilistic device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise, where each device is scheduled according to a certain probability and its model update is reweighted using this probability in aggregation.
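The reweighting idea can be illustrated with a minimal sketch (not the PO-FL algorithm itself): each device is scheduled with probability p_i, and its update is divided by p_i in aggregation, which keeps the aggregate unbiased in expectation. Variable names are illustrative.

```python
import numpy as np

def aggregate_with_probabilistic_scheduling(updates, probs, rng=None):
    """updates: list of np.ndarray model updates; probs: per-device scheduling probabilities."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(updates[0])
    for update, p in zip(updates, probs):
        if rng.random() < p:          # device i participates with probability p_i
            total += update / p       # reweight by 1/p_i so E[contribution] = update
    return total / len(updates)       # in expectation, equals the average over all devices
```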
no code implementations • 25 May 2023 • Liheng Bian, Daoyu Li, Shuoguang Wang, Chunyang Teng, Huteng Liu, Hanwen Xu, Xuyang Chang, Guoqiang Zhao, Shiyong Li, Jun Zhang
These elements are then sampled based on the ranking, building an experimentally optimal sparse sampling strategy that reduces the cost of the antenna array by up to one order of magnitude.
1 code implementation • 21 May 2023 • Hongru Li, Wentao Yu, Hengtao He, Jiawei Shao, Shenghui Song, Jun Zhang, Khaled B. Letaief
Task-oriented communication is an emerging paradigm for next-generation communication networks, which extracts and transmits task-relevant information, instead of raw data, for downstream applications.
no code implementations • 21 May 2023 • Xinyu Bian, Yuyi Mao, Jun Zhang
Most existing studies on joint activity detection and channel estimation for grant-free massive random access (RA) systems assume perfect synchronization among all active users, which is hard to achieve in practice.
1 code implementation • 13 May 2023 • Tailin Zhou, Zehong Lin, Jun Zhang, Danny H. K. Tsang
To gain further insights into model averaging in FL, we decompose the expected loss of the global model into five factors related to the client models.
no code implementations • 10 May 2023 • Xiaorui Bai, Wenyong Wang, Jun Zhang, Yueqing Wang, Yu Xiang
Flow field segmentation and classification help researchers to understand vortex structure and thus turbulent flow.
no code implementations • 2 May 2023 • Jun Zhang, Xiaohan Lin, Weinan E, Yi Qin Gao
Multiscale molecular modeling is widely applied in scientific research of molecular properties over large time and length scales.
no code implementations • 2 May 2023 • Wenqiang Sun, Sen Li, Yuchang Sun, Jun Zhang
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
no code implementations • 19 Apr 2023 • Jingjin Li, Chao Chen, Lei Pan, Mostafa Rahimi Azghadi, Hossein Ghodosi, Jun Zhang
The privacy issues include information stealing on the technical side and privacy breaches on the policy side.
Automatic Speech Recognition (ASR) +2
1 code implementation • 18 Apr 2023 • Weiqi Xu, Li Ling, Yiping Xie, Jun Zhang, John Folkesson
In this paper, a canonical transformation method consisting of intensity correction and slant range correction is proposed to decrease the above distortion.
no code implementations • 12 Apr 2023 • Feng-Feng Wei, Wei-neng Chen, Xiao-Qi Guo, Bowen Zhao, Sang-Woon Jeon, Jun Zhang
Inspired by this, this paper introduces crowdsourcing into evolutionary computation (EC) and proposes a crowdsourcing-based evolutionary computation (CEC) paradigm for distributed optimization.
no code implementations • 12 Apr 2023 • Xinyu Bian, Yuyi Mao, Jun Zhang
Specifically, by jointly leveraging the user activity correlation between adjacent transmission blocks and the historical channel estimation results, we first develop an activity-correlation-aware receiver for grant-free massive RA systems with retransmission based on the correlated approximate message passing (AMP) algorithm.
no code implementations • 12 Apr 2023 • Wei-neng Chen, Feng-Feng Wei, Tian-Fang Zhao, Kay Chen Tan, Jun Zhang
Based on this taxonomy, existing studies on DEC are reviewed in terms of purpose, parallel structure of the algorithm, parallel model for implementation, and the implementation environment.
1 code implementation • 4 Apr 2023 • Jiawei Shao, Fangzhao Wu, Jun Zhang
While federated learning is promising for privacy-preserving collaborative learning without revealing local data, it remains vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
1 code implementation • CVPR 2023 • Shenyuan Gao, Chunluan Zhou, Jun Zhang
Compared with previous two-stream trackers, the recent one-stream tracking pipeline, which allows earlier interaction between the template and search region, has achieved a remarkable performance gain.
no code implementations • 22 Mar 2023 • Bowen Zhao, Wei-neng Chen, Xiaoguo Li, Ximeng Liu, Qingqi Pei, Jun Zhang
To this end, in this paper, we discuss three typical optimization paradigms (i.e., centralized optimization, distributed optimization, and data-driven optimization) to characterize optimization modes of evolutionary computation and propose BOOM to sort out privacy concerns in evolutionary computation.
1 code implementation • 21 Mar 2023 • Xinjie Zhang, Jiawei Shao, Jun Zhang
This has inspired a distributed coding architecture aiming at reducing the encoding complexity.
no code implementations • 16 Mar 2023 • Yupeng Huang, Hong Zhang, Siyuan Jiang, Dajiong Yue, Xiaohan Lin, Jun Zhang, Yi Qin Gao
In this study, we take advantage of both traditional and machine-learning-based methods and present Deep Site and Docking Pose (DSDP), a method to improve the performance of blind docking.
1 code implementation • 11 Mar 2023 • Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Martin Weigert, Uwe Schmidt, Wenhua Zhang, Jun Zhang, Sen yang, Jinxi Xiang, Xiyue Wang, Josef Lorenz Rumberger, Elias Baumann, Peter Hirsch, Lihao Liu, Chenyang Hong, Angelica I. Aviles-Rivero, Ayushi Jain, Heeyoung Ahn, Yiyu Hong, Hussam Azzuni, Min Xu, Mohammad Yaqub, Marie-Claire Blache, Benoît Piégu, Bertrand Vernay, Tim Scherr, Moritz Böhland, Katharina Löffler, Jiachen Li, Weiqin Ying, Chixin Wang, Dagmar Kainmueller, Carola-Bibiane Schönlieb, Shuolin Liu, Dhairya Talsania, Yughender Meda, Prakash Mishra, Muhammad Ridzuan, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Banban Huang, Hsiang-Chin Chien, Ching-Ping Wang, Chia-Yen Lee, Hong-Kun Lin, Zaiyi Liu, Xipeng Pan, Chu Han, Jijun Cheng, Muhammad Dawood, Srijay Deshpande, Raja Muhammad Saad Bashir, Adam Shephard, Pedro Costa, João D. Nunes, Aurélio Campilho, Jaime S. Cardoso, Hrishikesh P S, Densen Puthussery, Devika R G, Jiji C V, Ye Zhang, Zijie Fang, Zhifan Lin, Yongbing Zhang, Chunhui Lin, Liukun Zhang, Lijian Mao, Min Wu, Vi Thi-Tuong Vo, Soo-Hyung Kim, Taebum Lee, Satoshi Kondo, Satoshi Kasai, Pranay Dumbhare, Vedant Phuse, Yash Dubey, Ankush Jamthikar, Trinh Thi Le Vuong, Jin Tae Kwak, Dorsa Ziaei, Hyun Jung, Tianyi Miao, David Snead, Shan E Ahmed Raza, Fayyaz Minhas, Nasir M. Rajpoot
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome.
no code implementations • 24 Feb 2023 • Xuefeng Wang, Xinran Li, Jiawei Shao, Jun Zhang
Learning communication strategies in cooperative multi-agent reinforcement learning (MARL) has recently attracted intensive attention.
Multi-agent Reinforcement Learning, Reinforcement Learning +2
1 code implementation • 14 Feb 2023 • Hengtao He, Xianghao Yu, Jun Zhang, Shenghui Song, Khaled B. Letaief
As one of the core technologies for 5G systems, massive multiple-input multiple-output (MIMO) introduces dramatic capacity improvements along with very high beamforming and spatial multiplexing gains.
no code implementations • 13 Feb 2023 • Fei Kong, Xiyue Wang, Jinxi Xiang, Sen Yang, Xinran Wang, Meng Yue, Jun Zhang, Junhan Zhao, Xiao Han, Yuhan Dong, Biyue Zhu, Fang Wang, Yueping Liu
We assessed the effectiveness of FACL in cancer diagnosis and Gleason grading tasks using 19,461 whole-slide images of prostate cancer from multiple centers.
1 code implementation • 13 Feb 2023 • Zeqiang Lai, Ying Fu, Jun Zhang
The features of RGB reference images are then processed by a multi-stage alignment module to explicitly align the features of RGB reference with the LR HSI.
1 code implementation • 9 Feb 2023 • Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, Lingpeng Kong
Based on EVALM, we scale up the size of examples efficiently in both instruction tuning and in-context learning to explore the boundary of the benefits from more annotated data.
1 code implementation • 24 Jan 2023 • Xinjie Zhang, Jiawei Shao, Jun Zhang
Multi-view image compression plays a critical role in 3D-related applications.
no code implementations • 12 Jan 2023 • Siteng Chen, Xiyue Wang, Jun Zhang, Liren Jiang, Ning Zhang, Feng Gao, Wei Yang, Jinxi Xiang, Sen Yang, Junhua Zheng, Xiao Han
The OSrisk for the prediction of 5-year survival status achieved an AUC of 0.784 (0.746-0.819) in the TCGA cohort, which was further verified in the independent General cohort and the CPTAC cohort, with AUCs of 0.774 (0.723-0.820) and 0.702 (0.632-0.765), respectively.
no code implementations • 3 Jan 2023 • Yandong Shi, Lixiang Lian, Yuanming Shi, Zixin Wang, Yong Zhou, Liqun Fu, Lin Bai, Jun Zhang, Wei Zhang
The sixth generation (6G) wireless systems are envisioned to enable the paradigm shift from "connected things" to "connected intelligence", featured by ultra high density, large-scale, dynamic heterogeneity, diversified functional requirements and machine learning capabilities, which leads to a growing need for highly efficient intelligent algorithms.
no code implementations • 28 Dec 2022 • Liheng Bian, Haoze Song, Lintao Peng, Xuyang Chang, Xi Yang, Roarke Horstmeyer, Lin Ye, Tong Qin, Dezhi Zheng, Jun Zhang
Benefiting from its single-photon sensitivity, single-photon avalanche diode (SPAD) array has been widely applied in various fields such as fluorescence lifetime imaging and quantum computing.
1 code implementation • 4 Dec 2022 • Boxuan Zhao, Jun Zhang, Deheng Ye, Jian Cao, Xiao Han, Qiang Fu, Wei Yang
Most of the existing methods rely on a multiple instance learning framework that requires densely sampling local patches at high magnification.
1 code implementation • 3 Dec 2022 • Jiahao Li, Zhourun Wu, Wenhao Lin, Jiawei Luo, Jun Zhang, Qingcai Chen, Junjie Chen
Although many feature extraction methods have been proposed to improve the performance of enhancer identification, they cannot learn position-related multiscale contextual information from raw DNA sequences.
1 code implementation • 29 Nov 2022 • Wentao Yu, Yifei Shen, Hengtao He, Xianghao Yu, Shenghui Song, Jun Zhang, Khaled B. Letaief
For practical usage, the proposed framework is further extended to wideband THz UM-MIMO systems with beam squint effect.
no code implementations • 28 Nov 2022 • Yifan Ma, Wentao Yu, Xianghao Yu, Jun Zhang, Shenghui Song, Khaled B. Letaief
In this paper, we propose a lightweight and flexible deep learning-based CSI feedback approach by capitalizing on deep equilibrium models.
2 code implementations • 27 Nov 2022 • Zhenhao Shuai, Hongbo Liu, Zhaolin Wan, Wei-Jie Yu, Jun Zhang
One of the key settings in SANE is the search space defined by cells and organs self-adapted to different DNN types.
1 code implementation • 25 Nov 2022 • Jiawei Shao, Xinjie Zhang, Jun Zhang
With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge.
1 code implementation • 17 Nov 2022 • Tailin Zhou, Jun Zhang, Danny H. K. Tsang
This enables client models to be updated in a shared feature space with consistent classifiers during local training.
no code implementations • 15 Nov 2022 • Wentao Yu, Hengtao He, Xianghao Yu, Shenghui Song, Jun Zhang, Khaled B. Letaief
Reliability is of paramount importance for the physical layer of wireless systems due to its decisive impact on end-to-end performance.
no code implementations • 8 Nov 2022 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang
During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.
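A rough sketch, under simplifying assumptions, of the compensation step described above: the server substitutes a gradient computed on its coded dataset for the missing updates of straggling devices. The actual encoding of the coded dataset is elided, and `grad_fn` is a hypothetical helper that returns a gradient for the given model and data.

```python
import numpy as np

def aggregate_with_straggler_compensation(received_updates, num_devices, grad_fn, coded_dataset, model):
    """received_updates: dict device_id -> gradient from the devices that responded in time.
    Missing contributions are replaced by a gradient computed on the server-side coded dataset."""
    num_missing = num_devices - len(received_updates)
    total = sum(received_updates.values())
    if num_missing > 0:
        # The server-side gradient stands in for the stragglers' contributions.
        total = total + num_missing * grad_fn(model, coded_dataset)
    return total / num_devices
```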
no code implementations • 27 Oct 2022 • Jun Zhang, Ping Li, Wei Wang
Recent advances in neural networks have been successfully applied to many tasks in online recommendation applications.
1 code implementation • 14 Oct 2022 • Jun Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong
In this paper, we propose Comprehensive Attention Benchmark (CAB) under a fine-grained attention taxonomy with four distinguishable attention patterns, namely, noncausal self, causal self, noncausal cross, and causal cross attentions.
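To make the taxonomy concrete, below is a small sketch of the four boolean attention masks under one common reading (True means the position may be attended to). For the causal cross case we assume query position i may only look at key positions up to i, which is an interpretation for illustration, not necessarily the benchmark's exact definition.

```python
import numpy as np

def attention_mask(pattern: str, q_len: int, k_len: int) -> np.ndarray:
    """Return a [q_len, k_len] boolean mask for one of the four attention patterns."""
    if pattern == "noncausal_self":
        assert q_len == k_len
        return np.ones((q_len, k_len), dtype=bool)            # every token sees every token
    if pattern == "causal_self":
        assert q_len == k_len
        return np.tril(np.ones((q_len, k_len), dtype=bool))   # token i sees tokens <= i
    if pattern == "noncausal_cross":
        return np.ones((q_len, k_len), dtype=bool)             # queries see the whole other sequence
    if pattern == "causal_cross":
        i = np.arange(q_len)[:, None]
        j = np.arange(k_len)[None, :]
        return j <= i                                          # query i sees key positions <= i
    raise ValueError(pattern)
```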
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei Li, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further development of parallel generation.
no code implementations • 6 Oct 2022 • Jiawei Shao, Yuchang Sun, Songze Li, Jun Zhang
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
7 code implementations • 5 Oct 2022 • Silvio Giancola, Anthony Cioppa, Adrien Deliège, Floriane Magera, Vladimir Somers, Le Kang, Xin Zhou, Olivier Barnich, Christophe De Vleeschouwer, Alexandre Alahi, Bernard Ghanem, Marc Van Droogenbroeck, Abdulrahman Darwish, Adrien Maglo, Albert Clapés, Andreas Luyts, Andrei Boiarov, Artur Xarles, Astrid Orcesi, Avijit Shah, Baoyu Fan, Bharath Comandur, Chen Chen, Chen Zhang, Chen Zhao, Chengzhi Lin, Cheuk-Yiu Chan, Chun Chuen Hui, Dengjie Li, Fan Yang, Fan Liang, Fang Da, Feng Yan, Fufu Yu, Guanshuo Wang, H. Anthony Chan, He Zhu, Hongwei Kan, Jiaming Chu, Jianming Hu, Jianyang Gu, Jin Chen, João V. B. Soares, Jonas Theiner, Jorge De Corte, José Henrique Brito, Jun Zhang, Junjie Li, Junwei Liang, Leqi Shen, Lin Ma, Lingchi Chen, Miguel Santos Marques, Mike Azatov, Nikita Kasatkin, Ning Wang, Qiong Jia, Quoc Cuong Pham, Ralph Ewerth, Ran Song, RenGang Li, Rikke Gade, Ruben Debien, Runze Zhang, Sangrok Lee, Sergio Escalera, Shan Jiang, Shigeyuki Odashima, Shimin Chen, Shoichi Masui, Shouhong Ding, Sin-wai Chan, Siyu Chen, Tallal El-Shabrawy, Tao He, Thomas B. Moeslund, Wan-Chi Siu, Wei zhang, Wei Li, Xiangwei Wang, Xiao Tan, Xiaochuan Li, Xiaolin Wei, Xiaoqing Ye, Xing Liu, Xinying Wang, Yandong Guo, YaQian Zhao, Yi Yu, YingYing Li, Yue He, Yujie Zhong, Zhenhua Guo, Zhiheng Li
The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team.
no code implementations • 27 Sep 2022 • Chengzhi Lin, AnCong Wu, Junwei Liang, Jun Zhang, Wenhang Ge, Wei-Shi Zheng, Chunhua Shen
To address this problem, we propose a Text-Adaptive Multiple Visual Prototype Matching model, which automatically captures multiple prototypes to describe a video by adaptive aggregation of video token features.
1 code implementation • 26 Sep 2022 • Junwei Liang, Enwei Zhang, Jun Zhang, Chunhua Shen
We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition.