Search Results for author: Shuai Xiao

Found 29 papers, 9 papers with code

TimePro: Efficient Multivariate Long-term Time Series Forecasting with Variable- and Time-Aware Hyper-state

1 code implementation 27 May 2025 Xiaowen Ma, ZhenLiang Ni, Shuai Xiao, Xinghao Chen

In long-term time series forecasting, different variables often influence the target variable over distinct time intervals, a challenge known as the multi-delay issue.

Mamba Time Series +1

Contrast-Unity for Partially-Supervised Temporal Sentence Grounding

no code implementations 18 Feb 2025 Haicheng Wang, Chen Ju, Weixiong Lin, Chaofan Ma, Shuai Xiao, Ya Zhang, Yanfeng Wang

Temporal sentence grounding aims to detect event timestamps described by the natural language query from given untrimmed videos.

Contrastive Learning Denoising +3

Advancing Myopia To Holism: Fully Contrastive Language-Image Pre-training

no code implementations CVPR 2025 Haicheng Wang, Chen Ju, Weixiong Lin, Shuai Xiao, Mengting Chen, Yixuan Huang, Chang Liu, Mingshuai Yao, Jinsong Lan, Ying Chen, Qingwen Liu, Yanfeng Wang

In the rapidly evolving field of vision-language models (VLMs), contrastive language-image pre-training (CLIP) has made significant strides, becoming a foundation for various downstream tasks.

Image-text Retrieval Image to text +1

Branches, Assemble! Multi-Branch Cooperation Network for Large-Scale Click-Through Rate Prediction at Taobao

no code implementations 20 Nov 2024 Xu Chen, Zida Cheng, Yuangang Pan, Shuai Xiao, Xiaoming Liu, Jinsong Lan, Qingwen Liu, Ivor W. Tsang

In this work, we introduce a novel Multi-Branch Cooperation Network (MBCnet) which enables multiple branch networks to collaborate with each other for better complex feature interaction modeling.

Click-Through Rate Prediction Memorization

Counterfactual Learning-Driven Representation Disentanglement for Search-Enhanced Recommendation

no code implementations 14 Nov 2024 Jiajun Cui, Xu Chen, Shuai Xiao, Chen Ju, Jinsong Lan, Qingwen Liu, Wei Zhang

To address this, we propose a Counterfactual learning-driven representation disentanglement framework for search-enhanced recommendation. It builds on the common belief that a user clicks an item under a query not solely because of the item-query match, but also because of the item's query-independent general features (e.g., color or style) that interest the user.

Collaborative Filtering counterfactual +3

DivNet: Diversity-Aware Self-Correcting Sequential Recommendation Networks

no code implementations 1 Nov 2024 Shuai Xiao, Zaifan Jiang

As the last stage of a typical recommendation system, collective recommendation aims to give the final touches to the recommended items and their layout so as to optimize overall objectives such as diversity and whole-page relevance.

Diversity Sequential Recommendation

Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models

1 code implementation 16 Jul 2024 Chen Ju, Haicheng Wang, Haozhe Cheng, Xu Chen, Zhonghua Zhai, Weilin Huang, Jinsong Lan, Shuai Xiao, Bo Zheng

Vision-Language Large Models (VLMs) have recently become a primary backbone of AI due to their impressive performance.

Quantization

Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos

no code implementations 26 Apr 2024 Zhengze Xu, Mengting Chen, Zhao Wang, Linyu Xing, Zhonghua Zhai, Nong Sang, Jinsong Lan, Shuai Xiao, Changxin Gao

To generate coherent motions, we first leverage the Kalman filter to construct smooth crops in the focus tunnel and inject the position embedding of the tunnel into attention layers to improve the continuity of the generated videos.

Virtual Try-on
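The Kalman-filter crop smoothing described in the snippet above can be sketched in one dimension: the filter tracks a position and velocity for the crop center and blends each noisy detection into a smooth trajectory. This is a generic constant-velocity sketch, not the paper's implementation; the noise parameters `q` and `r` are illustrative.

```python
def kalman_smooth(zs, q=1e-3, r=1.0):
    """1-D constant-velocity Kalman filter over noisy positions zs.

    Returns filtered positions. q is process noise, r is measurement noise.
    """
    x, v = zs[0], 0.0                 # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    out = []
    for z in zs:
        # Predict step with dt = 1: position advances by velocity.
        x = x + v
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update step: fold in the noisy position measurement z.
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        innov = z - x
        x, v = x + K0 * innov, v + K1 * innov
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out
```

Applied to per-frame crop centers, this yields a jitter-free "focus tunnel" trajectory at negligible cost compared to the generation model itself.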

Cell Variational Information Bottleneck Network

no code implementations 22 Mar 2024 Zhonghua Zhai, Chen Ju, Jinsong Lan, Shuai Xiao

In this work, we propose the Cell Variational Information Bottleneck Network (cellVIB), a convolutional neural network using an information bottleneck mechanism, which can be combined with the latest feedforward network architectures and trained end-to-end.

Face Recognition Representation Learning

Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Models

no code implementations 12 Dec 2023 Chen Ju, Haicheng Wang, Zeqian Li, Xu Chen, Zhonghua Zhai, Weilin Huang, Shuai Xiao

Vision-Language Large Models (VLMs) have become a primary backbone of AI due to their impressive performance.

Enhancing Cross-domain Click-Through Rate Prediction via Explicit Feature Augmentation

no code implementations 30 Nov 2023 Xu Chen, Zida Cheng, Jiangchao Yao, Chen Ju, Weilin Huang, Jinsong Lan, Xiaoyi Zeng, Shuai Xiao

The augmentation network then employs the explicit cross-domain knowledge as augmented information to boost target-domain CTR prediction.

Click-Through Rate Prediction Transfer Learning

Forgedit: Text Guided Image Editing via Learning and Forgetting

1 code implementation 19 Sep 2023 Shiwen Zhang, Shuai Xiao, Weilin Huang

Text-guided image editing on real or synthetic images, given only the original image itself and the target text prompt as inputs, is a very general and challenging task.

text-guided-image-editing

Automatic Deduction Path Learning via Reinforcement Learning with Environmental Correction

no code implementations 16 Jun 2023 Shuai Xiao, Chen Pan, Min Wang, Xinxin Zhu, Siqiao Xue, Jing Wang, Yunhua Hu, James Zhang, Jinghua Feng

To this end, we formulate the problem as a partially observable Markov decision process (POMDP) and employ an environment-correction algorithm based on the characteristics of the business.

Heuristic Search Hierarchical Reinforcement Learning +2

Cross-domain Augmentation Networks for Click-Through Rate Prediction

no code implementations 6 May 2023 Xu Chen, Zida Cheng, Shuai Xiao, Xiaoyi Zeng, Weilin Huang

The translation network is able to compute features from two domains with heterogeneous inputs separately by designing two independent branches, and then learn meaningful cross-domain knowledge using a designed cross-supervised feature translator.

Click-Through Rate Prediction Prediction +2

Category-Oriented Representation Learning for Image to Multi-Modal Retrieval

no code implementations 6 May 2023 Zida Cheng, Chen Ju, Shuai Xiao, Xu Chen, Zhonghua Zhai, Xiaoyi Zeng, Weilin Huang, Junchi Yan

We focus on representation learning for IMMR and analyze three key challenges: 1) skewed data and noisy labels in real-world industrial data, 2) the information inequality between the image and text modalities of documents when learning representations, and 3) effective and efficient training in large-scale industrial contexts.

Cross-Modal Retrieval Image Retrieval +5

Model-based Constrained MDP for Budget Allocation in Sequential Incentive Marketing

no code implementations 2 Mar 2023 Shuai Xiao, Le Guo, Zaifan Jiang, Lei Lv, Yuanbo Chen, Jun Zhu, Shuang Yang

Furthermore, we show that the dual problem can be solved by policy learning, with the optimal dual variable found efficiently via bisection search (i.e., by exploiting its monotonicity).

counterfactual Marketing
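The bisection search over the dual variable mentioned in the snippet above relies only on monotonicity: as the dual price rises, budget consumption falls. The budget-consumption curve `spend` below is a hypothetical stand-in for the paper's policy-induced spend, just to show the search mechanics.

```python
def bisect_dual(g, target, lo=0.0, hi=10.0, tol=1e-8):
    """Find lam with g(lam) = target, assuming g is monotone decreasing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > target:
            lo = mid   # spend still exceeds budget -> raise the price
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Toy monotone budget-consumption curve: spend decreases as lam grows.
spend = lambda lam: 100.0 / (1.0 + lam)
lam_star = bisect_dual(spend, target=25.0)  # analytically, lam = 3
```

In the constrained-MDP setting, each evaluation of `g` would correspond to rolling out (or estimating) the spend of the policy learned at that dual price.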

SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for Dynamic Scenes

no code implementations 20 Jun 2022 Wei Li, Shuai Xiao, Tianhong Dai, Shanxin Yuan, Tao Wang, Cheng Li, Fenglong Song

To further leverage these two paradigms, we propose a selective and joint HDR and denoising (SJ-HD$^2$R) imaging framework, utilizing scenario-specific priors to conduct the path selection with an accuracy of more than 93.3%.

Denoising

Learning Temporal Point Processes via Reinforcement Learning

no code implementations NeurIPS 2018 Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song

Social goods, such as healthcare, smart city, and information networks, often produce ordered event data in continuous time.

Point Processes reinforcement-learning +2

Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks

2 code implementations 24 May 2017 Shuai Xiao, Junchi Yan, Stephen M. Chu, Xiaokang Yang, Hongyuan Zha

In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics.

Point Processes Time Series +1
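The event-aligned RNN in the snippet above updates a hidden state at each asynchronous event and reads out a positive intensity. The toy below captures only that core idea with a single Elman-style cell and a softplus readout; all weights are made-up scalars, not learned parameters from the paper.

```python
import math

def rnn_intensity(inter_arrival_times, w_in=0.8, w_rec=0.5, w_out=1.2, b=0.1):
    """Toy event-aligned RNN: one hidden update per event gap dt.

    The softplus readout guarantees the conditional intensity stays positive,
    as required of a point-process intensity function.
    """
    h = 0.0
    intensities = []
    for dt in inter_arrival_times:
        h = math.tanh(w_in * dt + w_rec * h)        # event-aligned state update
        lam = math.log1p(math.exp(w_out * h + b))   # softplus keeps lam > 0
        intensities.append(lam)
    return intensities
```

The paper's model additionally runs a second RNN aligned with regular time-series indexes for the background rate; combining the two hidden states before the readout would follow the same pattern.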

Wasserstein Learning of Deep Generative Point Process Models

1 code implementation NeurIPS 2017 Shuai Xiao, Mehrdad Farajtabar, Xiaojing Ye, Junchi Yan, Le Song, Hongyuan Zha

Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena.

Point Processes
