no code implementations • 8 Jan 2025 • Han Huang, Yulun Wu, Chao Deng, Ge Gao, Ming Gu, Yu-Shen Liu
Recently, Gaussian Splatting has sparked a new trend in the field of computer vision.
no code implementations • 2 Jan 2025 • Yulun Wu, Han Huang, Wenyuan Zhang, Chao Deng, Ge Gao, Ming Gu, Yu-Shen Liu
Specifically, we investigate the impact of monocular priors on sparse scene reconstruction, introducing a novel prior based on inter-image matching information.
1 code implementation • 24 Dec 2024 • Chao Deng, Jiale Yuan, Pi Bu, Peijie Wang, Zhong-Zhi Li, Jian Xu, Xiao-Hui Li, Yuan Gao, Jun Song, Bo Zheng, Cheng-Lin Liu
Large vision language models (LVLMs) have remarkably improved document understanding capabilities, enabling the handling of complex document elements, longer contexts, and a wider range of tasks.
1 code implementation • 20 Dec 2024 • Sunbowen Lee, Hongqin Lyu, Yicheng Gong, Yingying Sun, Chao Deng
Reinforcement learning methods have produced promising traffic signal control policies that can be trained on large road networks.
1 code implementation • 15 Dec 2024 • Yulin Wang, Haoji Zhang, Yang Yue, Shiji Song, Chao Deng, Junlan Feng, Gao Huang
This paper presents a comprehensive exploration of the phenomenon of data redundancy in video understanding, with the aim of improving computational efficiency.
2 code implementations • 21 Aug 2024 • Hao Zhou, Zhijun Wang, ShuJian Huang, Xin Huang, Xue Han, Junlan Feng, Chao Deng, Weihua Luo, Jiajun Chen
Then, the model reviews the knowledge of the original languages with replay data amounting to less than 1% of post-pretraining, where we incorporate language priors routing to better recover the abilities of the original languages.
1 code implementation • 24 Jun 2024 • Peng Hu, Sizhe Liu, Changjiang Gao, Xin Huang, Xue Han, Junlan Feng, Chao Deng, ShuJian Huang
However, the relationship between capabilities in different languages is less explored.
no code implementations • 12 Jun 2024 • Runyan Yang, Huibao Yang, Xiqing Zhang, Tiantian Ye, Ying Liu, Yingying Gao, Shilei Zhang, Chao Deng, Junlan Feng
Recently, there have been attempts to integrate various speech processing tasks into a unified model.
no code implementations • 12 Jun 2024 • Yingying Gao, Shilei Zhang, Chao Deng, Junlan Feng
Pre-trained speech language models such as HuBERT and WavLM leverage unlabeled speech data for self-supervised learning and offer powerful representations for numerous downstream tasks.
2 code implementations • 22 May 2024 • Shimao Zhang, Changjiang Gao, Wenhao Zhu, Jiajun Chen, Xin Huang, Xue Han, Junlan Feng, Chao Deng, ShuJian Huang
Recently, Large Language Models (LLMs) have shown impressive language capabilities.
no code implementations • 5 Mar 2024 • Ce Chi, Xing Wang, Kexin Yang, Zhiyan Song, Di Jin, Lin Zhu, Chao Deng, Junlan Feng
A channel identifier, a global mixing module and a self-contextual attention module are devised in InjectTST.
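As a rough illustration of how such components can fit together (a sketch under my own assumptions, not the InjectTST implementation), a learnable channel identifier can tag each channel's tokens before a globally mixed context is injected back into otherwise channel-independent representations:

```python
# Minimal sketch (not the official InjectTST code): a learnable channel
# identifier plus a cross-attention that injects a globally mixed context
# into otherwise channel-independent token representations.
import torch
import torch.nn as nn

class ChannelIdentifierInjection(nn.Module):
    def __init__(self, n_channels: int, d_model: int, n_heads: int = 4):
        super().__init__()
        self.channel_id = nn.Parameter(torch.randn(n_channels, d_model))  # channel identifier
        self.global_mix = nn.Linear(n_channels * d_model, d_model)        # stand-in global mixing module
        self.inject_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):  # x: (batch, n_channels, n_tokens, d_model)
        b, c, t, d = x.shape
        x = x + self.channel_id[None, :, None, :]                 # mark each channel
        global_ctx = self.global_mix(x.permute(0, 2, 1, 3).reshape(b, t, c * d))  # (b, t, d)
        queries = x.reshape(b * c, t, d)
        ctx = global_ctx.repeat_interleave(c, dim=0)              # broadcast global context to every channel
        injected, _ = self.inject_attn(queries, ctx, ctx)         # inject global info per channel
        return (queries + injected).reshape(b, c, t, d)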
no code implementations • 20 Feb 2024 • Yanan Chen, Zihao Cui, Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
In this study, we present a novel weighting prediction approach, which explicitly learns the task relationships from downstream training information to address the core challenge of universal speech enhancement.
no code implementations • 1 Jan 2024 • Ruizhuo Xu, Ke Wang, Chao Deng, Mei Wang, Xi Chen, Wenhui Huang, Junlan Feng, Weihong Deng
With the increasing availability of consumer depth sensors, 3D face recognition (FR) has attracted more and more attention.
1 code implementation • 17 Nov 2023 • Shenghao Yang, Chenyang Wang, Yankai Liu, Kangping Xu, Weizhi Ma, Yiqun Liu, Min Zhang, Haitao Zeng, Junlan Feng, Chao Deng
In this paper, we propose CoWPiRec, an approach of Collaborative Word-based Pre-trained item representation for Recommendation.
no code implementations • 23 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Chao Deng, Junlan Feng
Cascading multiple pre-trained models is an effective way to compose an end-to-end system.
no code implementations • 20 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Yanhan Xu, Chao Deng, Junlan Feng
Self-supervised pre-trained models such as HuBERT and WavLM leverage unlabeled speech data for representation learning and offer significant improvements for numerous downstream tasks.
1 code implementation • 1 Sep 2023 • Yifan Pu, Yizeng Han, Yulin Wang, Junlan Feng, Chao Deng, Gao Huang
Since images belonging to the same meta-category usually share similar visual appearances, mining discriminative visual cues is the key to distinguishing fine-grained categories.
1 code implementation • 22 Aug 2023 • Lixiong Qin, Mei Wang, Chao Deng, Ke Wang, Xi Chen, Jiani Hu, Weihong Deng
To address the conflicts among multiple tasks and meet the different demands of tasks, a Multi-Level Channel Attention (MLCA) module is integrated into each task-specific analysis subnet, which can adaptively select the features from optimal levels and channels to perform the desired tasks.
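For intuition only, a squeeze-and-excitation-style gate over features pooled from several backbone levels could play a similar role; the sketch below uses hypothetical names and is not the paper's MLCA module:

```python
# Hypothetical sketch (not the paper's MLCA): features from several backbone
# levels are pooled, concatenated, and re-weighted so a task-specific subnet
# can emphasize the levels and channels it needs.
import torch
import torch.nn as nn

class MultiLevelChannelAttention(nn.Module):
    def __init__(self, channels_per_level, reduction: int = 8):
        super().__init__()
        total = sum(channels_per_level)
        self.gate = nn.Sequential(
            nn.Linear(total, total // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(total // reduction, total),
            nn.Sigmoid(),
        )

    def forward(self, feats):  # feats: list of (batch, C_i, H_i, W_i) from different levels
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)  # global average pool per level
        weights = self.gate(pooled)                                     # per-channel weights across all levels
        out, start = [], 0
        for f in feats:
            c = f.shape[1]
            out.append(f * weights[:, start:start + c, None, None])    # re-weight each level's channels
            start += c
        return out
```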
1 code implementation • ICCV 2023 • Yizeng Han, Dongchen Han, Zeyu Liu, Yulin Wang, Xuran Pan, Yifan Pu, Chao Deng, Junlan Feng, Shiji Song, Gao Huang
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
no code implementations • 12 Jun 2023 • Xing Wang, Zhendong Wang, Kexin Yang, Junlan Feng, Zhiyan Song, Chao Deng, Lin Zhu
To capture the intrinsic patterns of time series, we propose a novel deep learning network architecture, named Multi-resolution Periodic Pattern Network (MPPN), for long-term series forecasting.
no code implementations • 9 Mar 2023 • Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng
Previous contrastive learning methods for sentence representations often focus on insensitive transformations to produce positive pairs, but neglect the role of sensitive transformations that are harmful to semantic representations.
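A generic way to realize this idea (my assumption, not necessarily the paper's objective) is an InfoNCE loss in which insensitive views serve as positives and sensitive views serve as extra hard negatives:

```python
# Illustrative InfoNCE-style loss (a generic sketch, not the paper's method):
# views from meaning-preserving ("insensitive") transformations are positives,
# while views from meaning-changing ("sensitive") transformations are treated
# as additional hard negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, insensitive_view, sensitive_view, temperature: float = 0.05):
    """All inputs: (batch, dim) sentence embeddings from the same encoder."""
    a = F.normalize(anchor, dim=1)
    pos = F.normalize(insensitive_view, dim=1)
    neg = F.normalize(sensitive_view, dim=1)
    in_batch = a @ pos.T / temperature                           # (batch, batch); diagonal entries are positives
    hard_neg = (a * neg).sum(dim=1, keepdim=True) / temperature  # (batch, 1); sensitive view as hard negative
    logits = torch.cat([in_batch, hard_neg], dim=1)
    labels = torch.arange(a.size(0), device=a.device)            # each row's positive is its diagonal entry
    return F.cross_entropy(logits, labels)
```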
1 code implementation • 28 Feb 2023 • Xing Wang, Kexin Yang, Zhendong Wang, Junlan Feng, Lin Zhu, Juan Zhao, Chao Deng
First, we apply adaptive hybrid graph learning to learn the compound spatial correlations among cell towers.
no code implementations • 26 Jan 2023 • Runze Lei, Pinghui Wang, Junzhou Zhao, Lin Lan, Jing Tao, Chao Deng, Junlan Feng, Xidian Wang, Xiaohong Guan
In this work, we propose a novel FL framework for graph data, FedCog, to efficiently handle coupled graphs, a kind of distributed graph data that widely exists in real-world applications such as mobile carriers' communication networks and banks' transaction networks.
1 code implementation • 17 Sep 2022 • Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, Shiji Song, Junfen Cao, Wenhui Huang, Chao Deng, Gao Huang
Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training early classifiers.
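One way to realize this intuition (an assumed weighting rule, not necessarily the one used in the paper) is to weight the loss of earlier exits by how confidently the final classifier handles each sample:

```python
# Minimal sketch (assumed weighting rule, not necessarily the paper's):
# easy samples -- those the final classifier already predicts confidently --
# get larger weights in the losses of earlier exits.
import torch
import torch.nn.functional as F

def multi_exit_loss(exit_logits, targets):
    """exit_logits: list of (batch, n_classes) tensors, ordered earliest exit to final exit."""
    with torch.no_grad():
        # confidence of the final classifier on the true class, used as an "easiness" proxy
        final_conf = F.softmax(exit_logits[-1], dim=1).gather(1, targets[:, None]).squeeze(1)
    n_exits = len(exit_logits)
    denom = max(n_exits - 1, 1)
    total = 0.0
    for i, logits in enumerate(exit_logits):
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        # earlier exits (small i) emphasize easy samples; the final exit uses uniform weights
        ease_weight = final_conf * (1 - i / denom) + (i / denom)
        total = total + (ease_weight * per_sample).mean()
    return total / n_exits
```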
no code implementations • 26 Jun 2022 • Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity.
Automatic Speech Recognition (ASR) +4
no code implementations • 16 Jun 2022 • Yingying Gao, Junlan Feng, Tianrui Wang, Chao Deng, Shilei Zhang
Analysis shows that our proposed approach brings better uniformity to the trained model and noticeably enlarges the CTC spikes.
Automatic Speech Recognition (ASR) +2
no code implementations • 12 Jun 2022 • Lijun Gou, Jinrong Yang, Hangcheng Yu, Pan Wang, Xiaoping Li, Chao Deng
Then, a Semantic Consistency Feature Alignment Model (SCFAM) based on mixed-classes $\mathcal{H}$-divergence was also presented.
no code implementations • 19 Apr 2022 • Zhuoran Li, Xing Wang, Ling Pan, Lin Zhu, Zhendong Wang, Junlan Feng, Chao Deng, Longbo Huang
A2C-GS consists of three novel components, including a verifier to validate the correctness of a generated network topology, a graph neural network (GNN) to efficiently approximate topology rating, and a DRL actor layer to conduct a topology search.
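Purely to illustrate how such a three-part layout could interact (hypothetical interfaces and names, not the actual A2C-GS code):

```python
# Skeleton of the three-component layout described above (verifier, GNN rater,
# actor). Interfaces are hypothetical; the real A2C-GS design may differ.
import networkx as nx

class Verifier:
    def is_valid(self, topology: nx.Graph) -> bool:
        # e.g. reject disconnected topologies or those violating degree limits
        return nx.is_connected(topology)

class GNNRater:
    def score(self, topology: nx.Graph) -> float:
        # stand-in for a trained GNN that approximates the (expensive) topology rating
        return -nx.average_shortest_path_length(topology)

class Actor:
    def propose(self, topology: nx.Graph) -> nx.Graph:
        # stand-in for the DRL policy that edits the current topology
        candidate = topology.copy()
        nodes = list(candidate.nodes)
        candidate.add_edge(nodes[0], nodes[-1])
        return candidate

def search(initial: nx.Graph, steps: int = 100) -> nx.Graph:
    verifier, rater, actor = Verifier(), GNNRater(), Actor()
    best, best_score = initial, rater.score(initial)
    for _ in range(steps):
        candidate = actor.propose(best)
        if not verifier.is_valid(candidate):
            continue                      # invalid topologies never reach the rater
        score = rater.score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```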
no code implementations • 4 Mar 2022 • Peng Li, Jiayin Zhao, Jingyao Wu, Chao Deng, Haoqian Wang, Tao Yu
Light field disparity estimation is an essential task in computer vision with various applications.
no code implementations • 1 Nov 2021 • Xing Wang, Juan Zhao, Lin Zhu, Xu Zhou, Zhao Li, Junlan Feng, Chao Deng, Yong Zhang
AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture various Receptive Fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge with multi-step forecasting.
1 code implementation • 30 Jun 2021 • Zhihong Zhang, Chao Deng, Yang Liu, Xin Yuan, Jinli Suo, Qionghai Dai
Towards this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems by compressive sampling and computational reconstruction.
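For context, the standard SCI forward model sums mask-modulated frames into a single snapshot measurement; a minimal NumPy sketch of this general formulation (not this paper's specific system) is:

```python
# Standard snapshot-compressive-imaging forward model (general SCI formulation,
# not this paper's specific pipeline): B video frames are modulated by known
# masks and summed into a single 2D measurement.
import numpy as np

def sci_measure(frames: np.ndarray, masks: np.ndarray, noise_std: float = 0.0) -> np.ndarray:
    """frames, masks: (B, H, W). Returns a single (H, W) compressed snapshot."""
    assert frames.shape == masks.shape
    snapshot = (masks * frames).sum(axis=0)          # y = sum_t C_t * x_t
    if noise_std > 0:
        snapshot = snapshot + np.random.normal(0.0, noise_std, snapshot.shape)
    return snapshot
```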
1 code implementation • 19 Mar 2021 • Lijun Gou, Shengkai Wu, Jinrong Yang, Hangcheng Yu, Chenxi Lin, Xiaoping Li, Chao Deng
To solve this problem, a novel image synthesis method is proposed to replace the foreground texture of the source datasets with the texture of the target datasets.
no code implementations • ECML PKDD 2020 • Chao Deng, Hao Wang, Qing Tan, Jian Xu, Kun Gai
Due to the sparsity and latency of the user response behaviors such as clicks and conversions, traditional calibration methods may not work well in real-world online advertising systems.
no code implementations • 1 Jan 2021 • Xiaolei Hua, Su Wang, Lin Zhu, Dong Zhou, Junlan Feng, Yiting Wang, Chao Deng, Shuo Wang, Mingtao Mei
However, due to complex correlations and various temporal patterns of large-scale multivariate time series, a general unsupervised anomaly detection model with higher F1-score and Timeliness remains a challenging task.
no code implementations • 1 Jan 2021 • Xing Wang, Lin Zhu, Juan Zhao, Zhou Xu, Zhao Li, Junlan Feng, Chao Deng
Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management.
no code implementations • 23 Jul 2020 • Chao Deng, Xizhi Su, Chao Zhou
We observe that the agent with the most accurate prior estimate is likely to lead the herd, and the effect of competition on heterogeneous agents varies more with market characteristics compared to the homogeneous case.
1 code implementation • 28 Mar 2019 • Jindou Wu, Yunlun Yang, Chao Deng, Hongyi Tang, Bingning Wang, Haoze Sun, Ting Yao, Qi Zhang
In this paper, we present the Sogou Machine Reading Comprehension (SMRC) toolkit, which enables fast and efficient development of modern machine comprehension models, including both published models and original prototypes.
no code implementations • 8 Mar 2016 • Weidi Xu, Haoze Sun, Chao Deng, Ying Tan
Although the semi-supervised variational autoencoder (SemiVAE) works well on image classification tasks, it fails on text classification tasks when a vanilla LSTM is used as its decoder.