Search Results for author: Cong Xu

Found 31 papers, 17 papers with code

DropletVideo: A Dataset and Approach to Explore Integral Spatio-Temporal Consistent Video Generation

1 code implementation • 8 Mar 2025 • Runze Zhang, Guoguang Du, Xiaochuan Li, Qi Jia, Liang Jin, Lu Liu, Jingjing Wang, Cong Xu, Zhenhua Guo, YaQian Zhao, Xiaoli Gong, RenGang Li, Baoyu Fan

Prior research, especially in open-source projects, primarily focuses on either temporal or spatial consistency, or their basic combination, such as appending a description of a camera movement after a prompt without constraining the outcomes of this movement.

Video Generation

Collaborative Filtering Meets Spectrum Shift: Connecting User-Item Interaction with Graph-Structured Side Information

1 code implementation • 12 Feb 2025 • Yunhang He, Cong Xu, Jun Wang, Wei zhang

However, when graph-structured side information (e.g., multimodal similarity graphs or social networks) is integrated into the U-I bipartite graph, existing graph collaborative filtering methods fall short of achieving satisfactory performance.

Collaborative Filtering, Multimodal Recommendation

STAIR: Manipulating Collaborative and Multimodal Information for E-Commerce Recommendation

1 code implementation • 16 Dec 2024 • Cong Xu, Yunhang He, Jun Wang, Wei zhang

Combining the two distinct types of information raises additional challenges: 1) Modality erasure: vanilla graph convolution, though rather useful in collaborative filtering, erases multimodal information; 2) Modality forgetting: multimodal information tends to be gradually forgotten, since the recommendation loss essentially favors the learning of collaborative information.

Collaborative Filtering, Multimodal Recommendation

CAdam: Confidence-Based Optimization for Online Learning

no code implementations • 29 Nov 2024 • Shaowen Wang, AnAn Liu, Jian Xiao, Huan Liu, Yuekui Yang, Cong Xu, Qianqian Pu, Suncong Zheng, Wei zhang, Jian Li

Modern recommendation systems frequently employ online learning to dynamically update their models with freshly collected data.

Recommendation Systems

An Enhanced-State Reinforcement Learning Algorithm for Multi-Task Fusion in Large-Scale Recommender Systems

no code implementations • 18 Sep 2024 • Peng Liu, Jiawei Zhu, Cong Xu, Ming Zhao, Bin Wang

However, limited by their modeling pattern, all the current RL-MTF methods can only utilize user features as the state to generate actions for each user, but are unable to make use of item features and other valuable features, which leads to suboptimal results.

Multi-Task Learning, Recommendation Systems +1

Are LLM-based Recommenders Already the Best? Simple Scaled Cross-entropy Unleashes the Potential of Traditional Sequential Recommenders

1 code implementation • 26 Aug 2024 • Cong Xu, Zhangchi Zhu, Mo Yu, Jun Wang, Jianyong Wang, Wei zhang

Some studies have observed that LLMs, when fine-tuned with the cross-entropy (CE) loss and a full softmax, could achieve "state-of-the-art" performance in sequential recommendation.

Sequential Recommendation
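
The snippet above refers to fine-tuning with a cross-entropy loss computed over a full softmax on the entire item catalogue. As a hedged illustration (not the paper's code; the sizes and tensor names are invented for the example, and the paper's scaled variant is not reproduced), a minimal PyTorch version of that full-softmax CE looks like this:

```python
# Minimal sketch: cross-entropy with a full softmax over the whole item
# catalogue, as commonly used for next-item prediction. Sizes are illustrative.
import torch
import torch.nn.functional as F

num_items, hidden_dim, batch_size = 10_000, 64, 32

# Hidden state of the sequence encoder at the prediction step (assumed given).
seq_repr = torch.randn(batch_size, hidden_dim)
# One embedding per item in the catalogue.
item_emb = torch.randn(num_items, hidden_dim, requires_grad=True)
# Ground-truth next item for each sequence in the batch.
targets = torch.randint(0, num_items, (batch_size,))

# Full softmax: logits against every item, not a sampled subset.
logits = seq_repr @ item_emb.t()            # (batch_size, num_items)
loss = F.cross_entropy(logits, targets)     # CE with full softmax
loss.backward()
print(loss.item())
```

The key point is that the logits span all items; sampled-softmax baselines would replace the full `item_emb` matrix with a drawn subset of negatives.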

Data Efficient Evaluation of Large Language Models and Text-to-Image Models via Adaptive Sampling

no code implementations • 21 Jun 2024 • Cong Xu, Gayathri Saranathan, Mahammad Parwez Alam, Arpit Shah, James Lim, Soon Yee Wong, Foltin Martin, Suparna Bhattacharya

Empirical analysis across six NLP benchmarks reveals that: (1) quality-based sampling methods such as Quality SE and Quality CPD consistently achieve strong correlations (0.85 to 0.95) with full datasets at a 10% sampling rate; (2) clustering methods excel on specific benchmarks such as MMLU; (3) no single method universally outperforms the others across all metrics.

Clustering, MMLU
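
As a rough sketch of the kind of check behind finding (1), one can compare model scores on a small sampled subset against full-benchmark scores via Pearson correlation. The data below are synthetic placeholders, and the random subset merely stands in for the paper's quality- and clustering-based samplers:

```python
# Hedged sketch: check how well model scores on a 10% subset track
# full-benchmark scores, using Pearson correlation. Scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
num_models, num_examples = 8, 1_000

# Per-example correctness for each model on the full benchmark (synthetic).
per_example = rng.random((num_models, num_examples)) < rng.uniform(0.3, 0.8, (num_models, 1))
full_scores = per_example.mean(axis=1)

# A naive 10% random subset; quality- or clustering-based selection would
# replace this index choice.
subset_idx = rng.choice(num_examples, size=num_examples // 10, replace=False)
subset_scores = per_example[:, subset_idx].mean(axis=1)

corr = np.corrcoef(full_scores, subset_scores)[0, 1]
print(f"Pearson correlation between subset and full scores: {corr:.3f}")
```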

Comparing remote sensing-based forest biomass mapping approaches using new forest inventory plots in contrasting forests in northeastern and southwestern China

no code implementations • 24 May 2024 • Wenquan Dong, Edward T. A. Mitchard, Yuwei Chen, Man Chen, Congfeng Cao, Peilun Hu, Cong Xu, Steven Hancock

We then applied LightGBM and random forest regression to generate wall-to-wall AGB maps at 25 m resolution, using extensive GEDI footprints as well as Sentinel-1 data, ALOS-2 PALSAR-2 and Sentinel-2 optical data.
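
A minimal sketch of the mapping step, under stated assumptions (synthetic data; placeholder feature columns standing in for the Sentinel-1, ALOS-2 PALSAR-2 and Sentinel-2 predictors; AGB reference values standing in for GEDI footprints or field plots): fit LightGBM and random-forest regressors and compare their held-out fit, roughly as one would before producing wall-to-wall AGB maps:

```python
# Hedged sketch: fit gradient-boosted and random-forest regressors on
# per-pixel remote-sensing features to predict above-ground biomass (AGB).
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
X = rng.normal(size=(n, 6))                   # placeholder spectral/SAR bands
y = 50 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=n)  # placeholder AGB

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("LightGBM R^2:     ", gbm.score(X_te, y_te))
print("Random forest R^2:", rf.score(X_te, y_te))
```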

Enhancing User Interest based on Stream Clustering and Memory Networks in Large-Scale Recommender Systems

no code implementations • 21 May 2024 • Peng Liu, Nian Wang, Cong Xu, Ming Zhao, Bin Wang, Yi Ren

UIE enhances user interest, including user profiles and user history behavior sequences, by leveraging enhancement vectors and personalized enhancement vectors generated from dynamic streaming clustering of similar users and items from multiple perspectives; these vectors are stored and updated in memory networks.

Clustering, Recommendation Systems +1
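
A loose sketch of one ingredient described above, streaming clustering of user representations. This is not the UIE implementation: MiniBatchKMeans, the embedding dimension, and the idea of using centroids as enhancement vectors are all illustrative assumptions.

```python
# Hedged sketch: cluster user embeddings in a stream with MiniBatchKMeans and
# use each user's cluster centroid as a simple "enhancement vector".
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
dim, n_clusters = 16, 8
clusterer = MiniBatchKMeans(n_clusters=n_clusters, random_state=0)

# Simulate a stream of user-embedding mini-batches and update clusters online.
for _ in range(20):
    batch = rng.normal(size=(64, dim)).astype(np.float32)
    clusterer.partial_fit(batch)

# For a new batch of users, the centroid of the assigned cluster serves as an
# interest-enhancement vector that could be stored in a memory network.
users = rng.normal(size=(4, dim)).astype(np.float32)
assignments = clusterer.predict(users)
enhancement_vectors = clusterer.cluster_centers_[assignments]
print(enhancement_vectors.shape)  # (4, 16)
```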

Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline

no code implementations • 15 May 2024 • Qi Jia, Baoyu Fan, Cong Xu, Lu Liu, Liang Jin, Guoguang Du, Zhenhua Guo, YaQian Zhao, Xuanjing Huang, RenGang Li

In light of this, we introduce a novel research task, Multi-modal Sentiment Analysis for Comment Response of Video Induced (MSA-CRVI), which aims to infer opinions and emotions from the comments responding to micro videos.

Sentiment Analysis

An Offline Reinforcement Learning Algorithm Customized for Multi-Task Fusion in Large-Scale Recommender Systems

no code implementations • 19 Apr 2024 • Peng Liu, Cong Xu, Ming Zhao, Jiawei Zhu, Bin Wang, Yi Ren

IntegratedRL-MTF integrates an offline RL model with our online exploration policy to relax overly strict and complicated constraints, which significantly improves its performance.

Efficient Exploration, Multi-Task Learning +3

Pattern-wise Transparent Sequential Recommendation

no code implementations • 18 Feb 2024 • Kun Ma, Cong Xu, Zeyuan Chen, Wei zhang

It breaks the sequence of items into multi-level patterns that serve as atomic units throughout the recommendation process.

Decision Making, Sequential Recommendation

Understanding the Role of Cross-Entropy Loss in Fairly Evaluating Large Language Model-based Recommendation

no code implementations • 9 Feb 2024 • Cong Xu, Zhangchi Zhu, Jun Wang, Jianyong Wang, Wei zhang

Large language models (LLMs) have gained much attention in the recommendation community; some studies have observed that LLMs, fine-tuned with the cross-entropy loss and a full softmax, could already achieve state-of-the-art performance.

Language Modeling, Language Modelling +1

EEGPT: Pretrained Transformer for Universal and Reliable Representation of EEG Signals

1 code implementation • NeurIPS 2024 • Guagnyu Wang, Wenchao Liu, Yuhong He, Cong Xu, Lin Ma, Haifeng Li

However, challenges such as low signal-to-noise ratio (SNR), high inter-subject variability, and channel mismatch complicate the extraction of robust, universal EEG representations.

EEG, Representation Learning +1

Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box

1 code implementation • 6 Nov 2023 • Hao Zhang, Cong Xu, Shuaijie Zhang

Based on the above, we first analyzed the BBR model and concluded that distinguishing different regression samples and using different scales of auxiliary bounding boxes to calculate losses can effectively accelerate the bounding box regression process.

 Ranked #1 on Object Detection on PASCAL VOC 2007 (mAP@50 metric, using extra training data)

Object Detection, regression
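
A hedged sketch of the auxiliary-bounding-box idea summarized above: build auxiliary boxes that share the original centres but are rescaled by a ratio, then compute an IoU-style loss on them. This is an illustration of the concept, not the official Inner-IoU code; the ratio value and helper names are invented.

```python
# Illustration: IoU on auxiliary boxes obtained by rescaling ground-truth and
# predicted boxes around their centres by a scale ratio.
import torch

def aux_iou(pred, target, ratio=0.8, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); ratio scales the
    auxiliary boxes used for the IoU computation."""
    def scale(boxes):
        cx, cy = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
        w, h = (boxes[:, 2] - boxes[:, 0]) * ratio, (boxes[:, 3] - boxes[:, 1]) * ratio
        return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)

    p, t = scale(pred), scale(target)
    lt = torch.maximum(p[:, :2], t[:, :2])      # intersection top-left
    rb = torch.minimum(p[:, 2:], t[:, 2:])      # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (p[:, 2] - p[:, 0]) * (p[:, 3] - p[:, 1])
    area_t = (t[:, 2] - t[:, 0]) * (t[:, 3] - t[:, 1])
    return inter / (area_p + area_t - inter + eps)

pred = torch.tensor([[10., 10., 50., 50.]])
target = torch.tensor([[12., 14., 48., 52.]])
print(1 - aux_iou(pred, target))   # IoU-style loss on the auxiliary boxes
```

Choosing ratio < 1 shrinks the auxiliary boxes and ratio > 1 enlarges them, which is the lever the snippet describes for handling different regression samples.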

Graph-enhanced Optimizers for Structure-aware Recommendation Embedding Evolution

1 code implementation • 24 Sep 2023 • Cong Xu, Jun Wang, Jianyong Wang, Wei zhang

Embeddings play a key role in modern recommender systems because they are virtual representations of real-world entities and the foundation for subsequent decision-making models.

Decision Making, Graph Neural Network

X-TIME: An in-memory engine for accelerating machine learning on tabular data with CAMs

1 code implementation • 3 Apr 2023 • Giacomo Pedretti, John Moon, Pedro Bruel, Sergey Serebryakov, Ron M. Roth, Luca Buonanno, Tobias Ziegler, Cong Xu, Martin Foltin, Paolo Faraboschi, Jim Ignowski, Catherine E. Graves

In this work, we focus on an overall analog-digital architecture implementing a novel increased precision analog CAM and a programmable network on chip allowing the inference of state-of-the-art tree-based ML models, such as XGBoost and CatBoost.

scientific discovery

Less Emphasis on Difficult Layer Regions: Curriculum Learning for Singularly Perturbed Convection-Diffusion-Reaction Problems

1 code implementation • 23 Oct 2022 • Yufeng Wang, Cong Xu, Min Yang, Jin Zhang

Although Physics-Informed Neural Networks (PINNs) have been successfully applied in a wide variety of science and engineering fields, they can fail to accurately predict the underlying solution in slightly challenging convection-diffusion-reaction problems.
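
For context, a minimal PINN-style residual for a 1D singularly perturbed convection-diffusion-reaction problem is sketched below; the network size, coefficients, and right-hand side are illustrative assumptions, and the paper's curriculum over difficult layer regions is not reproduced.

```python
# Minimal PINN-style loss for -eps*u'' + b*u' + c*u = f on (0, 1) with
# homogeneous Dirichlet boundary conditions. Coefficients are illustrative.
import torch

eps, b, c = 1e-2, 1.0, 1.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.ones_like(x)                      # illustrative right-hand side
    return -eps * d2u + b * du + c * u - f

x_interior = torch.rand(128, 1)                 # collocation points
x_boundary = torch.tensor([[0.0], [1.0]])
loss = pde_residual(x_interior).pow(2).mean() + net(x_boundary).pow(2).mean()
loss.backward()
print(loss.item())
```

The small eps makes a thin boundary layer, which is exactly the "difficult region" setting where a plain residual loss of this kind tends to struggle.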

Sparse-based Domain Adaptation Network for OCTA Image Super-Resolution Reconstruction

no code implementations • 25 Jul 2022 • Huaying Hao, Cong Xu, Dan Zhang, Qifeng Yan, Jiong Zhang, Yue Liu, Yitian Zhao

To be more specific, we first perform a simple degradation of the 3×3 mm² high-resolution (HR) image to obtain the synthetic LR image.

Domain Adaptation, Image Super-Resolution
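
A minimal sketch of the "simple degradation" step mentioned in the snippet, assuming bicubic downsampling plus mild noise; the image size and noise level are placeholders, not the paper's exact degradation model.

```python
# Synthesize a low-resolution counterpart of a high-resolution image by
# downsampling and adding mild noise (placeholder degradation).
import torch
import torch.nn.functional as F

hr = torch.rand(1, 1, 304, 304)                 # placeholder HR OCTA projection

lr = F.interpolate(hr, scale_factor=0.5, mode="bicubic", align_corners=False)
lr = (lr + 0.01 * torch.randn_like(lr)).clamp(0, 1)   # mild sensor-like noise

print(hr.shape, lr.shape)                       # (1, 1, 304, 304) -> (1, 1, 152, 152)
```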

Understanding Adversarial Robustness from Feature Maps of Convolutional Layers

1 code implementation • 25 Feb 2022 • Cong Xu, Wei zhang, Jun Wang, Min Yang

Our theoretical analysis discovers that larger convolutional feature maps before average pooling can contribute to better resistance to perturbations, but the conclusion is not true for max pooling.

Adversarial Robustness
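
The claim above can be illustrated numerically; this is a toy experiment under synthetic data, not the paper's theoretical analysis. Under a small random perturbation, the global-average-pooled output moves less as the feature map grows, while the max-pooled output does not benefit in the same way.

```python
# Toy illustration: sensitivity of global average vs. max pooling to a small
# random perturbation of the feature map, at two spatial sizes.
import torch

torch.manual_seed(0)

def pooled_shift(size, pool, eps=0.1, trials=200):
    shifts = []
    for _ in range(trials):
        x = torch.randn(1, 8, size, size)
        delta = eps * torch.randn_like(x)
        shifts.append((pool(x + delta) - pool(x)).abs().mean().item())
    return sum(shifts) / trials

avg = lambda x: x.mean(dim=(2, 3))   # global average pooling
mx = lambda x: x.amax(dim=(2, 3))    # global max pooling

for size in (4, 16):
    print(f"{size}x{size}  avg-pool shift: {pooled_shift(size, avg):.4f}  "
          f"max-pool shift: {pooled_shift(size, mx):.4f}")
```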

Improve Deep Image Inpainting by Emphasizing the Complexity of Missing Regions

no code implementations • 13 Feb 2022 • Yufeng Wang, Dan Li, Cong Xu, Min Yang

Deep image inpainting research mainly focuses on constructing various neural network architectures or imposing novel optimization objectives.

Image Inpainting

Missingness Augmentation: A General Approach for Improving Generative Imputation Models

1 code implementation • 31 Jul 2021 • Yufeng Wang, Dan Li, Cong Xu, Min Yang

However, data augmentation, as a simple yet effective method, has not received enough attention in this area.

Data Augmentation, Imputation

Adversarial Momentum-Contrastive Pre-Training

1 code implementation • 24 Dec 2020 • Cong Xu, Dan Li, Min Yang

Recently proposed adversarial self-supervised learning methods usually require big batches and long training epochs to extract robust features, which will bring heavy computational overhead on platforms with limited resources.

Contrastive Learning, Data Augmentation +1

A Fast deflation Method for Sparse Principal Component Analysis via Subspace Projections

no code implementations • 3 Dec 2019 • Cong Xu, Min Yang, Jin Zhang

The implementation of conventional sparse principal component analysis (SPCA) on high-dimensional data sets has become a time-consuming task.

Coordinating Filters for Faster Deep Neural Networks

5 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li

Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy.
