1 code implementation • 3 Dec 2024 • Yifan Jiao, Yunhao Li, Junhua Ding, Qing Yang, Song Fu, Heng Fan, Libo Zhang
In this paper, we present a novel benchmark, GSOT3D, that aims at facilitating the development of generic 3D single object tracking (SOT) in the wild.
no code implementations • 17 Oct 2024 • Lei Huang, Xiaocheng Feng, Weitao Ma, Liang Zhao, Yuchun Fan, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin
Teaching large language models (LLMs) to generate text with citations to evidence sources can mitigate hallucinations and enhance verifiability in information-seeking systems.
1 code implementation • 5 Oct 2024 • Yangfan Ye, Xiachong Feng, Xiaocheng Feng, Weitao Ma, Libo Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin
News summarization in today's global scene can be daunting with its flood of multilingual content and varied viewpoints from different sources.
no code implementations • 2 Oct 2024 • Yingsheng Wu, Yuxuan Gu, Xiaocheng Feng, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin
However, existing scaling methods often rely on empirical approaches and lack a profound understanding of the internal distribution within RoPE, resulting in suboptimal performance in extending the context window length.
1 code implementation • 20 Sep 2024 • Yuxin Wang, Minghua Ma, Zekun Wang, Jingchang Chen, Huiming Fan, Liping Shan, Qing Yang, Dongliang Xu, Ming Liu, Bing Qin
To this end, we introduce an efficient structured pruning framework named CFSP, which leverages both Coarse (interblock) and Fine-grained (intrablock) activation information as an importance criterion to guide pruning.
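For illustration only, a minimal Python sketch of activation-guided structured pruning in the spirit of CFSP, combining a coarse block-level score with per-channel scores; the simple product rule, helper names, and random calibration data are assumptions, not the paper's exact criterion.

```python
import numpy as np

def channel_importance(block_acts):
    """Toy CFSP-style importance: per-channel (fine-grained) activation magnitude
    scaled by the whole block's (coarse) activation magnitude.
    block_acts: list of arrays shaped (tokens, channels), one per layer in the block."""
    coarse = np.mean([np.abs(a).mean() for a in block_acts])  # inter-block score
    fine = np.abs(block_acts[-1]).mean(axis=0)                # intra-block per-channel score
    return coarse * fine                                      # combined criterion (assumed form)

def prune_mask(importance, sparsity):
    """Keep the top-(1 - sparsity) fraction of channels."""
    k = int(len(importance) * (1.0 - sparsity))
    keep = np.argsort(importance)[-k:]
    mask = np.zeros_like(importance, dtype=bool)
    mask[keep] = True
    return mask

# Example: prune 50% of 8 channels using random calibration activations.
acts = [np.random.randn(128, 8) for _ in range(3)]
print(prune_mask(channel_importance(acts), sparsity=0.5))
```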
no code implementations • 14 Sep 2024 • Yitian Tao, Yan Liang, Luoyu Wang, Yongqing Li, Qing Yang, Han Zhang
Decoding neurophysiological signals into language is of great research interest within brain-computer interface (BCI) applications.
no code implementations • 27 Aug 2024 • Deyuan Qu, Qi Chen, Yongqi Zhu, Yihao Zhu, Sergei S. Avedisov, Song Fu, Qing Yang
In cooperative perception studies, there is often a trade-off between communication bandwidth and perception performance.
no code implementations • 26 Aug 2024 • Mohammad Dehghani Tezerjani, Mohammad Khoshnazar, Mohammadhamed Tangestanizadeh, Arman Kiani, Qing Yang
The emergence of mobile robotics, particularly in the automotive industry, introduces a promising era of enriched user experiences and adept handling of complex navigation challenges.
2 code implementations • 16 Aug 2024 • Xianzhen Luo, YiXuan Wang, Qingfu Zhu, Zhiming Zhang, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
New candidate tokens from the decoding process are then used to update the matrix.
no code implementations • 10 Jul 2024 • Hongtao Liu, Qiyao Peng, Qing Yang, Kai Liu, Hongyan Xu
Large language models (LLMs) have demonstrated exceptional performance across various natural language processing tasks.
no code implementations • 10 Jul 2024 • Qiyao Peng, Hongtao Liu, Hongyan Xu, Qing Yang, Minglai Shao, Wenjun Wang
Finally, we feed the prompt text into LLMs, and use Supervised Fine-Tuning (SFT) to make the model generate personalized reviews for the given user and target item.
1 code implementation • 4 Jul 2024 • Bojian Jiang, Yi Jing, Tianhao Shen, Tong Wu, Qing Yang, Deyi Xiong
To address this gap, we propose Automated Progressive Red Teaming (APRT) as an effectively learnable framework.
no code implementations • 25 Jun 2024 • YiXuan Wang, Xianzhen Luo, Fuxuan Wei, Yijun Liu, Qingfu Zhu, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
To address this problem, we propose the Make Some Noise (MSN) training framework as a replacement for the supervised fine-tuning stage of the large language model.
no code implementations • 27 May 2024 • Shaohua Dong, Yunhe Feng, Qing Yang, Yuewei Lin, Heng Fan
In this paper, we aim to mitigate such information loss to boost the performance of low-resolution Transformer tracking via dual knowledge distillation from a frozen high-resolution (but not larger) Transformer tracker.
Ranked #25 on Visual Object Tracking on LaSOT
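For readers curious how such a dual distillation objective might look, here is a hedged sketch combining feature-level and response-level distillation from a frozen teacher; the two-term loss, temperature, and tensor shapes are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_distill_loss(student_feat, teacher_feat, student_logits, teacher_logits, T=2.0):
    """Feature distillation (MSE) + response distillation (KL on softened logits).
    The teacher tensors are assumed to come from a frozen high-resolution tracker."""
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return feat_loss + kd_loss

# Toy example with random features and logits.
s_f, t_f = torch.randn(4, 256), torch.randn(4, 256)
s_l, t_l = torch.randn(4, 10), torch.randn(4, 10)
print(dual_distill_loss(s_f, t_f, s_l, t_l).item())
```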
1 code implementation • 23 May 2024 • Yanrui Du, Sendong Zhao, Danyang Zhao, Ming Ma, Yuhan Chen, Liangyu Huo, Qing Yang, Dongliang Xu, Bing Qin
When encountering malicious instructions, the router will assign a higher weight to the safe LLM to ensure that responses are harmless.
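A toy sketch of the routing idea described above: a router score decides whether the safety-aligned model or the main model answers. The hard 0.5 threshold and the mock router/LLM functions are assumptions for illustration.

```python
def route_response(prompt, main_llm, safe_llm, maliciousness):
    """Pick the responding model by a router's maliciousness score.
    A higher score sends the prompt to the safety-aligned model."""
    weight_safe = maliciousness(prompt)   # router output in [0, 1]
    if weight_safe > 0.5:                 # hard routing for simplicity
        return safe_llm(prompt)
    return main_llm(prompt)

# Mock components standing in for real LLMs and a trained router.
main_llm = lambda p: f"[main model answer to: {p}]"
safe_llm = lambda p: "I can't help with that request."
maliciousness = lambda p: 0.9 if "exploit" in p.lower() else 0.1

print(route_response("Explain photosynthesis", main_llm, safe_llm, maliciousness))
print(route_response("Write an exploit for this server", main_llm, safe_llm, maliciousness))
```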
1 code implementation • WWW '24: Proceedings of the ACM Web Conference 2024 • Huaiwen Zhang, Xinxin Liu, Qing Yang, Yang Yang, Fan Qi, Shengsheng Qian, Changsheng Xu
To tackle this challenge, we introduce the Test-Time Training for Rumor Detection (T^3RD) to enhance the performance of rumor detection models on low-resource datasets.
1 code implementation • CVPR 2024 • Wenyi Mo, Tianyu Zhang, Yalong Bai, Bing Su, Ji-Rong Wen, Qing Yang
Users assign weights or alter the injection time steps of certain words in the text prompts to improve the quality of generated images.
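As a hedged illustration of prompt-word reweighting, the sketch below scales per-token cross-attention scores by user-assigned weights before the softmax; the function names and the log-weight trick are assumptions, not this paper's method.

```python
import numpy as np

def reweight_attention(scores, token_weights):
    """scores: (query_pixels, prompt_tokens) raw cross-attention logits.
    token_weights: per-token multipliers supplied by the user (1.0 = unchanged)."""
    scaled = scores + np.log(np.asarray(token_weights))  # boost/suppress tokens pre-softmax
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

prompt = ["a", "red", "car"]
weights = [1.0, 2.0, 1.0]                 # emphasize "red"
scores = np.random.randn(4, len(prompt))
print(reweight_attention(scores, weights).round(3))
```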
no code implementations • 29 Mar 2024 • Luoyu Wang, Yitian Tao, Qing Yang, Yan Liang, Siwei Liu, Hongcheng Shi, Dinggang Shen, Han Zhang
To fully exploit the inherently complex and nonlinear relations among modalities while producing fine-grained representations for uni-modal inference, we subsequently add a modal alignment module to line up a dominant modality (e.g., PET) with representations of auxiliary modalities (MR).
1 code implementation • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts.
no code implementations • 13 Mar 2024 • Wenjing Zhu, Sining Sun, Changhao Shan, Peng Fan, Qing Yang
Conformer-based attention models have become the de facto backbone model for Automatic Speech Recognition tasks.
no code implementations • 4 Mar 2024 • Xin Lu, Yanyan Zhao, Bing Qin, Liangyu Huo, Qing Yang, Dongliang Xu
Through analysis, we found that the contribution ratio of Multi-Head Attention (a combination function) to pre-trained language modeling is a key factor affecting base capabilities.
no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
no code implementations • 21 Feb 2024 • Lianghu Guo, Tianli Tao, Xinyi Cai, Zihao Zhu, Jiawei Huang, Lixuan Zhu, Zhuoyang Gu, Haifeng Tang, Rui Zhou, Siyan Han, Yan Liang, Qing Yang, Dinggang Shen, Han Zhang
Early infancy is a rapid and dynamic neurodevelopmental period for behavior and neurocognition.
1 code implementation • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
no code implementations • 25 Jan 2024 • Mohan Zhou, Yalong Bai, Qing Yang, Tiejun Zhao
The ability to fine-tune generative models for text-to-image generation tasks is crucial, particularly given the complexity involved in accurately interpreting and visualizing textual inputs.
no code implementations • 16 Jan 2024 • Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Existing methods devise a learning module to acquire task-specific knowledge with a parameter-efficient tuning (PET) block, and a selection module to pick out the corresponding block for the testing input, aiming to handle the challenges of catastrophic forgetting and knowledge transfer in CL.
no code implementations • CVPR 2024 • Guiwei Zhang, Tianyu Zhang, Guanglin Niu, Zichang Tan, Yalong Bai, Qing Yang
Second, to enhance motion coherence and extend the generalization of appearance content to creative textual prompts, we propose a causal motion-enhanced attention mechanism.
no code implementations • 28 Dec 2023 • Liang Zhao, Xiachong Feng, Xiaocheng Feng, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu
Built upon the Transformer, large language models (LLMs) have captured worldwide attention due to their remarkable abilities.
no code implementations • 26 Dec 2023 • Sudip Dhakal, Dominic Carrillo, Deyuan Qu, Michael Nutt, Qing Yang, Song Fu
In recent times, there has been a notable surge in multimodal approaches that decorate raw LiDAR point clouds with camera-derived features to improve object detection performance.
no code implementations • 16 Dec 2023 • Zhaoxi Mu, Xinyu Yang, Sining Sun, Qing Yang
However, in the task of target speech extraction, certain elements of global and local semantic information in the reference speech, which are irrelevant to speaker identity, can lead to speaker confusion within the speech extraction network.
1 code implementation • 8 Dec 2023 • Deyuan Qu, Qi Chen, Tianyu Bai, HongSheng Lu, Heng Fan, Hao Zhang, Song Fu, Qing Yang
Cooperative perception for connected and automated vehicles is traditionally achieved through the fusion of feature maps from two or more vehicles.
1 code implementation • 1 Dec 2023 • Shaohua Dong, Yunhe Feng, Qing Yang, Yan Huang, Dongfang Liu, Heng Fan
Existing approaches often fully fine-tune a dual-branch encoder-decoder framework with a complicated feature fusion strategy for achieving multimodal semantic segmentation, which is training-costly due to the massive parameter updates in feature extraction and fusion.
Ranked #4 on Semantic Segmentation on SUN-RGBD (using extra training data)
1 code implementation • 29 Nov 2023 • Xiangyu Meng, Xue Li, Qing Yang, Huanhuan Dai, Lian Qiao, Hongzhen Ding, Long Hao, Xun Wang
According to the survival analysis results on 14 cancer types, Gene-MOE outperformed state-of-the-art models on 12 cancer types.
1 code implementation • 23 Oct 2023 • Peng Fan, Changhao Shan, Sining Sun, Qing Yang, Jianwei Zhang
Following the initial encoder, we introduce an intermediate CTC loss function to compute the label frame, enabling us to extract the key frames and blank frames for KFSA.
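A minimal sketch of how intermediate CTC posteriors can separate blank frames from key frames; the 0.9 threshold rule and variable names are assumptions used only to illustrate the idea.

```python
import numpy as np

def split_frames(ctc_log_probs, blank_id=0, blank_threshold=0.9):
    """ctc_log_probs: (frames, vocab) log-posteriors from an intermediate CTC head.
    Frames whose blank probability exceeds the threshold are treated as blank frames;
    the rest are kept as key frames (e.g., for key-frame self-attention)."""
    probs = np.exp(ctc_log_probs)
    blank_prob = probs[:, blank_id]
    blank_frames = np.where(blank_prob > blank_threshold)[0]
    key_frames = np.where(blank_prob <= blank_threshold)[0]
    return key_frames, blank_frames

# Toy posteriors over a 5-symbol vocabulary for 6 frames.
logits = np.random.randn(6, 5)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(split_frames(log_probs))
```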
no code implementations • 18 Aug 2023 • Peiyuan Si, Jun Zhao, Kwok-Yan Lam, Qing Yang
In this paper, we aim to explore the use of uplink semantic communications with the assistance of UAVs in order to improve data collection efficiency for Metaverse users in remote areas.
no code implementations • 19 Jul 2023 • Zhigang Chang, Weitai Hu, Qing Yang, Shibao Zheng
In dyadic speaker-listener interactions, the listener's head reactions, together with the speaker's head movements, constitute an important non-verbal semantic expression.
no code implementations • 24 May 2023 • Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin
Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications.
1 code implementation • 23 May 2023 • Xuanyu Zhang, Bingbing Li, Qing Yang
Generative chat models, such as ChatGPT and GPT-4, have revolutionized natural language generation (NLG) by incorporating instructions and human feedback to achieve significant performance improvements.
1 code implementation • 19 May 2023 • Xuanyu Zhang, Qing Yang
Large-scale language models like ChatGPT and GPT-4 have gained attention for their impressive conversational and generative capabilities.
1 code implementation • 19 May 2023 • Xuanyu Zhang, Qing Yang, Dongliang Xu
In recent years, pre-trained language models have undergone rapid development with the emergence of large-scale models.
no code implementations • 4 May 2023 • Hang Chen, Xinyu Yang, Qing Yang
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data under latent confounding.
1 code implementation • 25 Apr 2023 • Guangyuan Ma, Hongtao Liu, Xing Wu, Wanhui Qian, Zhepeng Lv, Qing Yang, Songlin Hu
Firstly, we introduce the user behavior masking pre-training task to recover the masked user behaviors based on their contextual behaviors.
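A hedged sketch of such a masked-behavior pre-training objective: randomly mask items in a user's behavior sequence and train a small Transformer encoder to recover them from context. The architecture, mask ratio, and vocabulary size are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BehaviorMaskedLM(nn.Module):
    """Tiny Transformer encoder that reconstructs masked behavior IDs from context."""
    def __init__(self, n_items=1000, dim=64, mask_id=0):
        super().__init__()
        self.mask_id = mask_id
        self.embed = nn.Embedding(n_items, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_items)

    def forward(self, behaviors, mask_ratio=0.15):
        mask = torch.rand_like(behaviors, dtype=torch.float) < mask_ratio
        corrupted = behaviors.masked_fill(mask, self.mask_id)
        logits = self.head(self.encoder(self.embed(corrupted)))
        # Loss only on the masked positions.
        return nn.functional.cross_entropy(logits[mask], behaviors[mask])

model = BehaviorMaskedLM()
batch = torch.randint(1, 1000, (8, 20))   # 8 users, 20 behaviors each (ID 0 = mask token)
print(model(batch).item())
```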
no code implementations • ICCV 2023 • Xinran Liu, Xiaoqiong Liu, Ziruo Yi, Xin Zhou, Thanh Le, Libo Zhang, Yan Huang, Qing Yang, Heng Fan
In addition, we derive a variant named PlanarTrack_BB for generic object tracking from PlanarTrack.
no code implementations • 4 Jan 2023 • Peiyuan Si, Wenhan Yu, Jun Zhao, Kwok-Yan Lam, Qing Yang
A huge amount of data in the physical world needs to be synchronized to the virtual world to provide an immersive experience for users, and there will be higher coverage requirements to include more users in the Metaverse.
no code implementations • 30 Nov 2022 • Shaohuai Shi, Qing Yang, Yang Xiang, Shuhan Qi, Xuan Wang
To enable the pre-trained models to be fine-tuned with local data on edge devices without sharing data with the cloud, we design an efficient split fine-tuning (SFT) framework for edge and cloud collaborative learning.
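A schematic sketch of split fine-tuning: the edge device holds the first layers and exchanges only activations and activation gradients with the cloud, so raw data never leaves the edge. The toy MLP, split point, and manual SGD update are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Edge-side and cloud-side halves of a pre-trained model (toy MLP stand-in).
edge_part = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
cloud_part = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2))

def split_training_step(x, y, lr=1e-2):
    """One step: edge forward -> send activations -> cloud forward/backward ->
    send activation gradients back -> edge backward. Raw data stays on the edge."""
    h = edge_part(x)                           # edge forward
    h_sent = h.detach().requires_grad_(True)   # "transmitted" activations
    loss = nn.functional.cross_entropy(cloud_part(h_sent), y)
    loss.backward()                            # cloud backward, yields h_sent.grad
    h.backward(h_sent.grad)                    # edge backward with returned gradient
    with torch.no_grad():                      # simple SGD update on both sides
        for p in list(edge_part.parameters()) + list(cloud_part.parameters()):
            p -= lr * p.grad
            p.grad = None
    return loss.item()

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
print(split_training_step(x, y))
```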
1 code implementation • 13 Jun 2022 • Jiawei Liu, Kaiyu Zhang, Weitai Hu, Qing Yang
To address this problem, we propose a step-by-step training super-net scheme from one-shot NAS to few-shot NAS.
no code implementations • 12 Jun 2022 • Yanjie Song, Luona Wei, Qing Yang, Jian Wu, Lining Xing, Yingwu Chen
In this way, the search information can be effectively used by the reinforcement learning method.
1 code implementation • 9 Jun 2022 • Jinkun Cao, Ruiqian Nai, Qing Yang, Jialei Huang, Yang Gao
In this paper, we examine negative-free contrastive learning methods to study the disentanglement property empirically.
no code implementations • 18 Apr 2022 • Xuanyu Zhang, Qing Yang, Dongliang Xu
Knowledge graph embedding (KGE) aims to learn continuous vectors for relations and entities in a knowledge graph.
Ranked #8 on Link Property Prediction on ogbl-wikikg2
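As a generic illustration of KGE (not the specific model proposed in this work), a TransE-style sketch that scores a triple by how closely head + relation approximates tail in embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 10, 32
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def transe_score(head, relation, tail):
    """TransE: a plausible triple should satisfy head + relation ≈ tail,
    so a smaller L2 distance means a more plausible link."""
    diff = entity_emb[head] + relation_emb[relation] - entity_emb[tail]
    return -np.linalg.norm(diff)

# Rank candidate tails for (head=3, relation=1): higher score = more plausible.
scores = [transe_score(3, 1, t) for t in range(n_entities)]
print(np.argsort(scores)[::-1][:5])
```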
1 code implementation • 1 Feb 2022 • Yu Zhao, Shaopeng Wei, Yu Guo, Qing Yang, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
This study is the first to consider both types of risk and their joint effects in bankruptcy prediction.
1 code implementation • 16 Nov 2021 • Haili Wang, Jingda Guo, Xu Ma, Song Fu, Qing Yang, Yunzhong Xu
As a distinct advantage of our framework, cloud system administrators only need to check a small number of detected anomalies, and their decisions are leveraged to update the detector.
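A toy sketch of the human-in-the-loop idea: administrators label only the few flagged points, and their verdicts adjust the detector. The threshold detector and update rule are simplified assumptions, not the framework's actual learner.

```python
import numpy as np

class FeedbackDetector:
    """Toy anomaly detector: flags points with high anomaly scores; administrator
    verdicts on flagged points are fed back to adjust the decision threshold."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def detect(self, scores):
        return np.where(scores > self.threshold)[0]

    def update(self, flagged_scores, admin_labels):
        """admin_labels: True = real anomaly, False = false alarm."""
        false_alarms = flagged_scores[~np.asarray(admin_labels)]
        if len(false_alarms):                 # raise the bar past the false alarms
            self.threshold = max(self.threshold, false_alarms.max() * 1.05)

det = FeedbackDetector()
scores = np.array([0.5, 3.2, 8.1, 3.4])
flagged = det.detect(scores)                  # administrators review only these few
det.update(scores[flagged], admin_labels=[False, True, False])
print(flagged, det.threshold)
```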
no code implementations • 11 Oct 2021 • Qing Yang, Yaping Zhao
Aiming at high-dimensional (HD) data acquisition and analysis, snapshot compressive imaging (SCI) obtains the 2D compressed measurement of HD data with optical imaging systems and reconstructs HD data using compressive sensing algorithms.
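The standard video-SCI forward model, sketched below as an assumed illustration: a single 2D snapshot is the sum of the HD frames, each modulated element-wise by a coding mask.

```python
import numpy as np

def sci_measurement(frames, masks):
    """Standard video-SCI forward model: the 2D snapshot is the sum of each
    HD frame modulated element-wise by its coding mask, Y = sum_t C_t * X_t."""
    return np.sum(masks * frames, axis=0)

T, H, W = 8, 64, 64                                      # 8 frames compressed into one snapshot
frames = np.random.rand(T, H, W)                         # high-dimensional data cube
masks = (np.random.rand(T, H, W) > 0.5).astype(float)    # binary coding masks
snapshot = sci_measurement(frames, masks)                # single 2D measurement
print(frames.shape, "->", snapshot.shape)
```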
no code implementations • 29 Sep 2021 • Jinkun Cao, Qing Yang, Jialei Huang, Yang Gao
In this paper, we explored the possibility of using contrastive methods to learn a disentangled representation, a discriminative approach that is drastically different from previous approaches.
no code implementations • 29 Jun 2021 • Qing Wu, Yuwei Li, Lan Xu, Ruiming Feng, Hongjiang Wei, Qing Yang, Boliang Yu, Xiaozhao Liu, Jingyi Yu, Yuyao Zhang
To obtain high-quality high-resolution (HR) MR images, we propose a novel image reconstruction network named IREM, which is trained on multiple low-resolution (LR) MR images and achieves an arbitrary up-sampling rate for HR image reconstruction.
no code implementations • 1 May 2021 • Qing Yang, Hao Wang
This work develops a novel distributed method for the residential transactive energy system that enables multiple users to interactively optimize their energy management of HVAC systems and behind-the-meter batteries.
no code implementations • 1 May 2021 • Qing Yang, Hao Wang, Taotao Wang, Shengli Zhang, Xiaoxiao Wu, Hui Wang
In this paper, we develop a blockchain-based VPP energy management platform to facilitate a rich set of transactive energy activities among residential users with renewables, energy storage, and flexible loads in a VPP.
no code implementations • 11 Jan 2021 • Qing Yang, Hao Wang
Further, we design an efficient blockchain system tailored for IoT devices and develop the smart contract to support the holistic transactive energy management system.
no code implementations • 1 Nov 2020 • Qing Yang, Zhenning Hong, Ruyan Tian, Tingting Ye, Liangliang Zhang
In this paper, we document a novel machine-learning-based bottom-up approach for static and dynamic portfolio optimization on a potentially large number of assets.
no code implementations • 26 Oct 2020 • Qing Yang, Hao Wang
However, how to scale up HVAC energy management for a group of users while preserving users' privacy remains a big challenge.
no code implementations • 24 Sep 2020 • Jingda Guo, Dominic Carrillo, Sihai Tang, Qi Chen, Qing Yang, Song Fu, Xi Wang, Nannan Wang, Paparao Palacharla
To reduce the amount of transmitted data, feature-map-based fusion has recently been proposed as a practical solution to cooperative 3D object detection by autonomous vehicles.
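A toy sketch of feature-map-level fusion between two vehicles' spatially aligned BEV features; element-wise max is just one simple fusion rule, used here as an assumption rather than this paper's design.

```python
import numpy as np

def fuse_feature_maps(ego_feat, coop_feat):
    """Element-wise max fusion of spatially aligned BEV feature maps
    (channels, height, width) from the ego and a cooperating vehicle."""
    assert ego_feat.shape == coop_feat.shape
    return np.maximum(ego_feat, coop_feat)

ego = np.random.rand(64, 100, 100)    # ego vehicle's BEV features
coop = np.random.rand(64, 100, 100)   # received (already warped) features
fused = fuse_feature_maps(ego, coop)  # would be fed to a shared detection head
print(fused.shape)
```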
no code implementations • 9 Jul 2020 • Xu Ma, Jingda Guo, Sihai Tang, Zhinan Qiao, Qi Chen, Qing Yang, Song Fu
With DCANet, all attention blocks in a CNN model are trained jointly, which improves the ability of attention learning.
1 code implementation • 4 Jun 2020 • Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, Yuan Zhu
Deep neural networks (DNNs) have recently been applied in many advanced and diverse tasks, such as medical diagnosis and automatic driving.
1 code implementation • 24 Apr 2020 • Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, Yuan Zhu
Face anti-spoofing has become an increasingly important and critical security feature for authentication systems, due to rampant and easily launchable presentation attacks.
1 code implementation • 10 Mar 2020 • Yun Ye, Ganmei You, Jong-Kae Fwu, Xia Zhu, Qing Yang, Yuan Zhu
By using OT, most negligible or unimportant channels are pruned to achieve high sparsity while minimizing performance degradation.
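An illustrative sketch of threshold-based channel pruning on batch-norm scaling factors; OT's actual way of choosing the threshold is not reproduced here, and a fixed percentile stands in as an assumption.

```python
import numpy as np

def prune_by_threshold(bn_scales, threshold):
    """Channels whose |gamma| (batch-norm scale) falls below the threshold are
    treated as negligible and pruned; the rest are kept."""
    keep = np.abs(bn_scales) >= threshold
    return keep

gammas = np.abs(np.random.randn(256)) * np.random.rand(256)
thr = np.percentile(np.abs(gammas), 70)        # stand-in for the optimal threshold
keep_mask = prune_by_threshold(gammas, thr)
print(f"kept {keep_mask.sum()} / {len(gammas)} channels")
```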
no code implementations • 13 Sep 2019 • Qing Yang, Jiachen Mao, Zuoguan Wang, Hai Li
In addition to conventional compression techniques, e.g., weight pruning and quantization, removing unimportant activations can reduce the amount of data communication and the computation cost.
no code implementations • 19 Jun 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for improving deployment efficiency.
1 code implementation • 13 May 2019 • Qi Chen, Sihai Tang, Qing Yang, Song Fu
A point cloud based 3D object detection method is proposed to work on a diversity of aligned point clouds.
Ranked #3 on 3D Object Detection on OPV2V
no code implementations • ICLR 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
With the rapid scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment.