1 code implementation • 8 Mar 2025 • Runze Zhang, Guoguang Du, Xiaochuan Li, Qi Jia, Liang Jin, Lu Liu, Jingjing Wang, Cong Xu, Zhenhua Guo, YaQian Zhao, Xiaoli Gong, RenGang Li, Baoyu Fan
Prior research, especially in open-source projects, primarily focuses on either temporal or spatial consistency, or a basic combination of the two, such as appending a camera-movement description to a prompt without constraining the outcome of that movement.
1 code implementation • 12 Feb 2025 • Yunhang He, Cong Xu, Jun Wang, Wei Zhang
However, when graph-structured side information (e.g., multimodal similarity graphs or social networks) is integrated into the U-I bipartite graph, existing graph collaborative filtering methods fall short of achieving satisfactory performance.
1 code implementation • 16 Dec 2024 • Cong Xu, Yunhang He, Jun Wang, Wei Zhang
Combining these two distinct types of information raises additional challenges: 1) Modality erasure: vanilla graph convolution, though rather useful in collaborative filtering, erases multimodal information; 2) Modality forgetting: multimodal information tends to be gradually forgotten, since the recommendation loss essentially drives the learning of collaborative information.
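The modality-erasure point can be illustrated with a toy sketch (not the paper's model): repeated mean-aggregation graph convolution drives initially distinct per-node features toward a common value, washing out whatever modality-specific signal each node started with. The graph, features, and layer count below are all illustrative.

```python
def graph_convolve(features, neighbors, layers):
    """Vanilla (mean-aggregation) graph convolution applied `layers` times.

    features:  {node: scalar feature}
    neighbors: {node: [adjacent nodes]}, self-loops included.
    """
    for _ in range(layers):
        features = {n: sum(features[m] for m in neighbors[n]) / len(neighbors[n])
                    for n in features}
    return features

# Fully connected 3-node toy graph with distinct "modality" features per node.
nbrs = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
feats = {0: 1.0, 1: 0.0, 2: -1.0}

# One round of averaging already collapses all three features to one value.
smoothed = graph_convolve(feats, nbrs, layers=1)
```

On this dense toy graph a single layer suffices for the collapse; on sparser graphs the same smoothing happens over more layers, which is the oversmoothing behavior the modality-erasure challenge refers to.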
no code implementations • 29 Nov 2024 • Shaowen Wang, AnAn Liu, Jian Xiao, Huan Liu, Yuekui Yang, Cong Xu, Qianqian Pu, Suncong Zheng, Wei Zhang, Jian Li
Modern recommendation systems frequently employ online learning to dynamically update their models with freshly collected data.
no code implementations • 18 Sep 2024 • Peng Liu, Jiawei Zhu, Cong Xu, Ming Zhao, Bin Wang
However, limited by their modeling pattern, all current RL-MTF methods can only use user features as the state for generating per-user actions; they cannot exploit item features or other valuable features, which leads to suboptimal results.
1 code implementation • 26 Aug 2024 • Cong Xu, Zhangchi Zhu, Mo Yu, Jun Wang, Jianyong Wang, Wei Zhang
Some studies have observed that LLMs, when fine-tuned by the cross-entropy (CE) loss with a full softmax, could achieve "state-of-the-art" performance in sequential recommendation.
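The loss in question can be sketched as a generic full-softmax cross-entropy (an illustration, not the papers' training code): unlike sampled-softmax objectives common in recommendation, the normalizer runs over the entire item catalogue.

```python
import math

def full_softmax_ce(logits, target):
    """Cross-entropy over the FULL item catalogue (no negative sampling).

    logits: one score per catalogue item; target: index of the true next item.
    """
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)                             # normalizer spans every item
    return -math.log(exps[target] / z)

# Toy 5-item catalogue; item 2 is the ground-truth next item.
loss = full_softmax_ce([1.0, 0.5, 2.0, -1.0, 0.0], target=2)
```

Since the model already scores the true item highest here, the loss lands well below the uniform-prediction value of ln(5).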
no code implementations • 21 Jun 2024 • Cong Xu, Gayathri Saranathan, Mahammad Parwez Alam, Arpit Shah, James Lim, Soon Yee Wong, Foltin Martin, Suparna Bhattacharya
Empirical analysis across six NLP benchmarks reveals that: (1) quality-based sampling methods such as Quality SE and Quality CPD consistently achieve strong correlations (0.85 to 0.95) with full datasets at a 10% sampling rate; (2) clustering methods excel on specific benchmarks such as MMLU; (3) no single method universally outperforms the others across all metrics.
no code implementations • 24 May 2024 • Wenquan Dong, Edward T. A. Mitchard, Yuwei Chen, Man Chen, Congfeng Cao, Peilun Hu, Cong Xu, Steven Hancock
We then applied LightGBM and random forest regression to generate wall-to-wall AGB maps at 25 m resolution, using extensive GEDI footprints together with Sentinel-1, ALOS-2 PALSAR-2, and Sentinel-2 optical data.
no code implementations • 21 May 2024 • Peng Liu, Nian Wang, Cong Xu, Ming Zhao, Bin Wang, Yi Ren
UIE enhances user interest, including user profiles and user history behavior sequences, by leveraging enhancement vectors and personalized enhancement vectors generated from dynamic streaming clustering of similar users and items from multiple perspectives; these vectors are stored and updated in memory networks.
no code implementations • 15 May 2024 • Qi Jia, Baoyu Fan, Cong Xu, Lu Liu, Liang Jin, Guoguang Du, Zhenhua Guo, YaQian Zhao, Xuanjing Huang, RenGang Li
In light of this, we introduce a novel research task, Multi-modal Sentiment Analysis for Comment Response of Video Induced (MSA-CRVI), which aims to infer opinions and emotions from the comment responses to micro videos.
no code implementations • 19 Apr 2024 • Peng Liu, Cong Xu, Ming Zhao, Jiawei Zhu, Bin Wang, Yi Ren
IntegratedRL-MTF integrates an offline RL model with our online exploration policy to relax overly strict and complicated constraints, which significantly improves its performance.
no code implementations • 18 Feb 2024 • Kun Ma, Cong Xu, Zeyuan Chen, Wei Zhang
It breaks the sequence of items into multi-level patterns that serve as atomic units throughout the recommendation process.
no code implementations • 9 Feb 2024 • Cong Xu, Zhangchi Zhu, Jun Wang, Jianyong Wang, Wei Zhang
Large language models (LLMs) have gained much attention in the recommendation community; some studies have observed that LLMs, fine-tuned by the cross-entropy loss with a full softmax, could achieve state-of-the-art performance already.
1 code implementation • NeurIPS 2024 • Guangyu Wang, Wenchao Liu, Yuhong He, Cong Xu, Lin Ma, Haifeng Li
However, challenges such as low signal-to-noise ratio (SNR), high inter-subject variability, and channel mismatch complicate the extraction of robust, universal EEG representations.
1 code implementation • 6 Nov 2023 • Hao Zhang, Cong Xu, Shuaijie Zhang
Based on the above, we first analyzed the BBR model and concluded that distinguishing different regression samples and using different scales of auxiliary bounding boxes to calculate losses can effectively accelerate the bounding box regression process.
Ranked #1 on Object Detection on PASCAL VOC 2007 (mAP@50 metric, using extra training data)
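The idea of computing losses on auxiliary bounding boxes at different scales can be sketched as follows (a minimal illustration, not the paper's exact formulation; `scaled_box` and the 0.5 ratio are assumptions for the demo): IoU measured on shrunken auxiliary boxes reacts more sharply to the same misalignment than IoU on the original boxes.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def scaled_box(box, ratio):
    """Auxiliary box: same centre, edge lengths scaled by `ratio`."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    hw, hh = (box[2] - box[0]) / 2 * ratio, (box[3] - box[1]) / 2 * ratio
    return (cx - hw, cy - hh, cx + hw, cy + hh)

# Prediction shifted one unit from the ground truth.
pred, gt = (0.0, 0.0, 4.0, 4.0), (1.0, 0.0, 5.0, 4.0)
plain = iou(pred, gt)                                   # 0.6
aux = iou(scaled_box(pred, 0.5), scaled_box(gt, 0.5))   # 1/3: a stronger penalty
```

A loss built on the auxiliary (smaller-scale) IoU therefore produces larger gradients for near-miss samples, which is one way differently scaled auxiliary boxes can accelerate regression.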
1 code implementation • 24 Sep 2023 • Cong Xu, Jun Wang, Jianyong Wang, Wei Zhang
Embeddings play a key role in modern recommender systems because they are virtual representations of real-world entities and the foundation for subsequent decision-making models.
1 code implementation • 3 Apr 2023 • Giacomo Pedretti, John Moon, Pedro Bruel, Sergey Serebryakov, Ron M. Roth, Luca Buonanno, Tobias Ziegler, Cong Xu, Martin Foltin, Paolo Faraboschi, Jim Ignowski, Catherine E. Graves
In this work, we focus on an overall analog-digital architecture implementing a novel increased precision analog CAM and a programmable network on chip allowing the inference of state-of-the-art tree-based ML models, such as XGBoost and CatBoost.
1 code implementation • 23 Oct 2022 • Yufeng Wang, Cong Xu, Min Yang, Jin Zhang
Although Physics-Informed Neural Networks (PINNs) have been successfully applied in a wide variety of science and engineering fields, they can fail to accurately predict the underlying solution in slightly challenging convection-diffusion-reaction problems.
no code implementations • 25 Jul 2022 • Huaying Hao, Cong Xu, Dan Zhang, Qifeng Yan, Jiong Zhang, Yue Liu, Yitian Zhao
To be more specific, we first perform a simple degradation of the 3×3 mm² high-resolution (HR) image to obtain the synthetic LR image.
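A "simple degradation" of this kind can be sketched with plain average-pool downsampling (an assumed, generic operator; the paper's exact degradation may differ):

```python
def degrade(hr, factor=2):
    """Synthesize an LR image from an HR one by factor-by-factor average pooling."""
    h, w = len(hr), len(hr[0])
    return [[sum(hr[i * factor + di][j * factor + dj]
                 for di in range(factor) for dj in range(factor)) / factor ** 2
             for j in range(w // factor)]
            for i in range(h // factor)]

# Toy 4x4 HR "image" -> 2x2 synthetic LR counterpart.
hr = [[1, 3, 5, 7],
      [1, 3, 5, 7],
      [2, 4, 6, 8],
      [2, 4, 6, 8]]
lr = degrade(hr)
```

Pairing each HR patch with its degraded LR version yields the supervised training pairs a super-resolution model needs when real LR/HR pairs are unavailable.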
1 code implementation • 25 Feb 2022 • Cong Xu, Wei Zhang, Jun Wang, Min Yang
Our theoretical analysis discovers that larger convolutional feature maps before average pooling contribute to better resistance to perturbations, but this conclusion does not hold for max pooling.
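The intuition for the average-pooling half of the claim can be shown numerically (a toy sketch, not the paper's analysis): averaging over a larger feature map dilutes a fixed single-element perturbation, so the pooled output moves less.

```python
def avg_pool(feature_map):
    """Global average pooling over a 2-D feature map."""
    flat = [v for row in feature_map for v in row]
    return sum(flat) / len(flat)

def pooled_shift(size, delta=1.0):
    """How far a single-element perturbation of magnitude `delta`
    moves the globally average-pooled output of a size x size map."""
    clean = [[1.0] * size for _ in range(size)]
    perturbed = [row[:] for row in clean]
    perturbed[0][0] += delta
    return abs(avg_pool(perturbed) - avg_pool(clean))

shift_4x4 = pooled_shift(4)   # delta spread over 16 elements -> 1/16
shift_2x2 = pooled_shift(2)   # delta spread over 4 elements  -> 1/4
```

Max pooling lacks this dilution: if the perturbed element becomes the maximum, the pooled output moves by the full delta regardless of map size, which is consistent with the conclusion not holding there.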
no code implementations • 13 Feb 2022 • Yufeng Wang, Dan Li, Cong Xu, Min Yang
Deep image inpainting research mainly focuses on constructing various neural network architectures or imposing novel optimization objectives.
1 code implementation • ACM Transactions on Knowledge Discovery from Data 2022 • Jianliang Gao, Xiaoting Ying, Cong Xu, Jianxin Wang, Shichao Zhang, Zhao Li
For a given group of stocks, the proposed TRAN model can output the ranking results of stocks according to their return ratios.
1 code implementation • 31 Jul 2021 • Yufeng Wang, Dan Li, Cong Xu, Min Yang
However, data augmentation, as a simple yet effective method, has not received enough attention in this area.
1 code implementation • 19 May 2021 • Cong Xu, Xiang Li, Min Yang
Neural networks are susceptible to artificially designed adversarial perturbations.
Ranked #1 on Adversarial Attack on CIFAR-10
1 code implementation • 24 Dec 2020 • Cong Xu, Dan Li, Min Yang
Recently proposed adversarial self-supervised learning methods usually require large batches and long training schedules to extract robust features, which imposes heavy computational overhead on platforms with limited resources.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Zhong Zhang, Chongming Gao, Cong Xu, Rui Miao, Qinli Yang, Junming Shao
They call it the representation degeneration problem and propose a cosine regularization to solve it.
no code implementations • 8 Oct 2020 • Cong Xu, Dan Li, Min Yang
It is well-known that deep neural networks are vulnerable to adversarial attacks.
no code implementations • 3 Dec 2019 • Cong Xu, Min Yang, Jin Zhang
The implementation of conventional sparse principal component analysis (SPCA) on high-dimensional data sets has become a time-consuming task.
1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li
It becomes an open question whether escaping sharp minima can improve the generalization.
1 code implementation • NeurIPS 2017 • Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients.
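The quantization that the convergence proof rests on can be sketched as follows (an illustrative re-implementation, not the released TernGrad code): each gradient component is stochastically mapped to {-s, 0, +s} with s = max|g|, with probabilities chosen so the quantized gradient is unbiased.

```python
import random

def ternarize(grad, rng=random.Random(0)):
    """Stochastically quantize a gradient vector to {-s, 0, +s}, s = max|g|.

    The sign of g is kept with probability |g| / s, so the expected output
    equals the input gradient -- the unbiasedness the convergence proof uses.
    """
    s = max(abs(g) for g in grad)
    if s == 0.0:
        return [0.0] * len(grad)
    out = []
    for g in grad:
        keep = rng.random() < abs(g) / s     # Bernoulli(|g| / s)
        out.append((s if g > 0 else -s) if keep else 0.0)
    return out

tern = ternarize([0.3, -0.1, 0.0, 0.5])
```

Each worker then communicates only a ternary vector plus one scalar s, which is where the bandwidth savings in distributed training come from.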
5 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.