Search Results for author: Shengsheng Wang

Found 13 papers, 7 papers with code

Robust Misinformation Detection by Visiting Potential Commonsense Conflict

1 code implementation • 30 Apr 2025 • Bing Wang, Ximing Li, Changchun Li, Bingrui Zhao, Bo Fu, Renchu Guan, Shengsheng Wang

In this paper, we propose a novel plug-and-play augmentation method for the MD task, namely Misinformation Detection with Potential Commonsense Conflict (MD-PCC).

Misinformation, Triplet
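
As a rough illustration of the plug-and-play augmentation idea described above, the sketch below appends a textual description of any detected commonsense conflict to an article before it reaches a misinformation detector. The triplet extractor and commonsense lookup table are hypothetical stand-ins, not MD-PCC's actual components.

```python
# Hypothetical sketch of a plug-and-play commonsense-conflict augmentation.
# `extract_triplets` and `commonsense_kb` are stand-ins, not the paper's components.
from typing import Callable, Dict, List, Tuple

Triplet = Tuple[str, str, str]  # (subject, relation, object)

def augment_with_conflict(
    article: str,
    extract_triplets: Callable[[str], List[Triplet]],
    commonsense_kb: Dict[Tuple[str, str], str],
) -> str:
    """Append textual descriptions of potential commonsense conflicts to the article."""
    conflicts = []
    for subj, rel, obj in extract_triplets(article):
        expected = commonsense_kb.get((subj, rel))
        if expected is not None and expected.lower() != obj.lower():
            conflicts.append(
                f"Commonsense conflict: {subj} {rel} {obj}, but typically {subj} {rel} {expected}."
            )
    return article if not conflicts else article + " " + " ".join(conflicts)

# Toy usage with stand-in components.
toy_kb = {("penguins", "can"): "swim"}
toy_extractor = lambda text: [("penguins", "can", "fly")] if "penguins" in text else []
print(augment_with_conflict("Scientists report penguins can fly south every winter.", toy_extractor, toy_kb))
```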

Revisiting CLIP for SF-OSDA: Unleashing Zero-Shot Potential with Adaptive Threshold and Training-Free Feature Filtering

no code implementations • 19 Apr 2025 • Yongguang Li, Jindong Li, Qi Wang, Qianli Xing, Runliang Niu, Shengsheng Wang, Menglin Yang

Source-Free Unsupervised Open-Set Domain Adaptation (SF-OSDA) methods built on CLIP face two significant issues: (1) although heavily dependent on domain-specific threshold selection, existing methods employ simple fixed thresholds, underutilizing CLIP's zero-shot potential in SF-OSDA scenarios; and (2) they overlook intrinsic class tendencies while employing complex training to enforce feature separation, incurring deployment costs and feature shifts that compromise CLIP's generalization ability.

Domain Adaptation
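
To illustrate the adaptive-threshold idea, the sketch below scores target images against CLIP text embeddings of the known classes and sets the open-set threshold from the score distribution itself (here via a simple two-means split). The thresholding rule is an assumption for illustration, not the paper's exact procedure.

```python
# Minimal sketch of adaptive open-set thresholding on CLIP-style zero-shot scores.
import numpy as np

def two_means_threshold(scores: np.ndarray, iters: int = 20) -> float:
    """Split 1-D scores into two groups by iterating on the midpoint of their means."""
    t = scores.mean()
    for _ in range(iters):
        low, high = scores[scores <= t], scores[scores > t]
        if len(low) == 0 or len(high) == 0:
            break
        t_new = (low.mean() + high.mean()) / 2
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return float(t)

def adaptive_open_set_predict(image_feats: np.ndarray, text_feats: np.ndarray):
    """image_feats: (N, D), text_feats: (K, D); rows L2-normalized."""
    sims = image_feats @ text_feats.T           # (N, K) cosine similarities
    max_sim = sims.max(axis=1)                  # confidence of the best known class
    pred = sims.argmax(axis=1)
    t = two_means_threshold(max_sim)            # data-driven, per-domain threshold
    return np.where(max_sim < t, -1, pred), t   # -1 marks unknown (open-set) samples

# Toy usage with random normalized features.
img = np.random.randn(8, 512); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = np.random.randn(5, 512); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
labels, threshold = adaptive_open_set_predict(img, txt)
```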

Collaboration and Controversy Among Experts: Rumor Early Detection by Tuning a Comment Generator

1 code implementation • 5 Apr 2025 • Bing Wang, Bingrui Zhao, Ximing Li, Changchun Li, Wanfu Gao, Shengsheng Wang

However, these rumor detection (RD) methods often fail in the early stages of rumor propagation, when only limited user comments are available, leading the community to focus on a more challenging task named Rumor Early Detection (RED).

Cogito, ergo sum: A Neurobiologically-Inspired Cognition-Memory-Growth System for Code Generation

1 code implementation • 30 Jan 2025 • Yanlong Li, Jindong Li, Qi Wang, Menglin Yang, He Kong, Shengsheng Wang

Large language model-based Multi-Agent Systems (MAS) have demonstrated promising performance in enhancing the efficiency and accuracy of code generation tasks.

Code Generation, Hippocampus

Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation

no code implementations • 21 Oct 2024 • Yongguang Li, Yueqi Cao, Jindong Li, Qi Wang, Shengsheng Wang

Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples, addressing critical data privacy concerns.

Unsupervised Domain Adaptation

Why Misinformation is Created? Detecting them by Integrating Intent Features

no code implementations • 27 Jul 2024 • Bing Wang, Ximing Li, Changchun Li, Bo Fu, Songwen Pei, Shengsheng Wang

Accordingly, we propose to reason about the intent behind articles and form the corresponding intent features to strengthen the veracity discrimination of article features.

Decoder, Misinformation
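
A minimal sketch of the fusion step implied above: article features are concatenated with intent features before veracity classification. How the intent features are reasoned out is not shown in the snippet, so both feature vectors are treated as given, and all dimensions are assumptions.

```python
# Minimal sketch: fuse article features with intent features for veracity classification.
# The intent-reasoning step that produces `intent_feat` is not shown here.
import torch
import torch.nn as nn

class IntentAwareDetector(nn.Module):
    def __init__(self, article_dim=768, intent_dim=128, hidden=256, num_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(article_dim + intent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, article_feat, intent_feat):
        fused = torch.cat([article_feat, intent_feat], dim=-1)  # simple late fusion
        return self.classifier(fused)                           # veracity logits

logits = IntentAwareDetector()(torch.randn(4, 768), torch.randn(4, 128))
```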

Harmfully Manipulated Images Matter in Multimodal Misinformation Detection

1 code implementation • 27 Jul 2024 • Bing Wang, Shengsheng Wang, Changchun Li, Renchu Guan, Ximing Li

Accordingly, in this work, we propose to detect misinformation by learning manipulation features that indicate whether the image has been manipulated, as well as intention features regarding the harmful and harmless intentions of the manipulation.

Image Manipulation, Image Manipulation Detection, +1
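
The sketch below illustrates the multi-task flavor of the approach: auxiliary heads predict whether the image was manipulated and whether the manipulation intent is harmful, and their outputs feed the final misinformation head. The architecture and dimensions are placeholders, not the paper's actual design.

```python
# Multi-task sketch: a shared image-text feature feeds three heads:
# (1) manipulated vs. pristine, (2) harmful vs. harmless intent, (3) final verdict.
import torch
import torch.nn as nn

class ManipulationAwareMD(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.manipulation_head = nn.Linear(feat_dim, 2)   # manipulated vs. pristine
        self.intention_head = nn.Linear(feat_dim, 2)      # harmful vs. harmless intent
        self.misinfo_head = nn.Linear(feat_dim + 4, 2)    # final verdict also sees both auxiliary logits

    def forward(self, fused_feat):
        m = self.manipulation_head(fused_feat)
        i = self.intention_head(fused_feat)
        verdict = self.misinfo_head(torch.cat([fused_feat, m, i], dim=-1))
        return verdict, m, i

verdict, m_logits, i_logits = ManipulationAwareMD()(torch.randn(4, 512))
```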

Training-Free Unsupervised Prompt for Vision-Language Models

1 code implementation • 25 Apr 2024 • Sifan Long, Linbin Wang, Zhen Zhao, Zichang Tan, Yiming Wu, Shengsheng Wang, Jingdong Wang

In light of this, we propose Training-Free Unsupervised Prompts (TFUP), which maximally preserves the pre-trained vision-language model's inherent representation capabilities and enhances them with a residual connection to similarity-based prediction probabilities in a training-free and labeling-free manner.

Prompt Learning
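
A minimal, training-free sketch of the residual-combination idea: CLIP-style zero-shot probabilities are added to similarity-based probabilities computed from a small cache of pseudo-labeled features. The cache construction, temperature, and mixing weight are assumptions rather than TFUP's exact design.

```python
# Training-free sketch: residual combination of zero-shot and cache-similarity predictions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def training_free_predict(img_feat, text_feats, cache_feats, cache_labels, num_classes, alpha=1.0):
    """img_feat: (D,), text_feats: (K, D), cache_feats: (M, D); all L2-normalized."""
    zero_shot = softmax(100.0 * img_feat @ text_feats.T)     # CLIP-style zero-shot probabilities
    affinity = np.exp(-(1.0 - img_feat @ cache_feats.T))     # similarity to cached samples, (M,)
    one_hot = np.eye(num_classes)[cache_labels]              # (M, K) pseudo-label matrix
    cache_probs = softmax(affinity @ one_hot)                # similarity-based prediction
    return zero_shot + alpha * cache_probs                   # residual combination

# Toy usage with random normalized features and random pseudo-labels.
rng = np.random.default_rng(0)
norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
probs = training_free_predict(norm(rng.standard_normal(512)), norm(rng.standard_normal((10, 512))),
                              norm(rng.standard_normal((32, 512))), rng.integers(0, 10, 32), num_classes=10)
```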

Unsupervised Sentence Representation Learning with Frequency-induced Adversarial Tuning and Incomplete Sentence Filtering

1 code implementation • 15 May 2023 • Bing Wang, Ximing Li, Zhiyao Yang, Yuanyuan Guan, Jiayin Li, Shengsheng Wang

To address these problems, we fine-tune pre-trained language models (PLMs) by leveraging the frequency information of words and propose a novel unsupervised sentence representation learning (USRL) framework, namely Sentence Representation Learning with Frequency-induced Adversarial tuning and Incomplete sentence filtering (SLT-FAI).

Language Modelling, Sentence, +1
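
To make the incomplete-sentence-filtering component concrete, the sketch below drops the most frequent (least informative) tokens from a sentence to build a filtered view. The frequency table, drop ratio, and how this view enters training are assumptions; the frequency-induced adversarial-tuning branch is not shown.

```python
# Minimal sketch of frequency-based incomplete sentence filtering.
from collections import Counter

def build_frequencies(corpus):
    """Count token frequencies over a tokenized-by-whitespace corpus."""
    return Counter(tok for sent in corpus for tok in sent.lower().split())

def filter_incomplete(sentence, freqs, drop_ratio=0.3):
    """Drop the highest-frequency tokens to form an 'incomplete' view of the sentence."""
    tokens = sentence.split()
    ranked = sorted(range(len(tokens)), key=lambda i: freqs[tokens[i].lower()], reverse=True)
    drop = set(ranked[: int(len(tokens) * drop_ratio)])
    return " ".join(t for i, t in enumerate(tokens) if i not in drop)

corpus = ["the cat sat on the mat", "the dog chased the cat"]
freqs = build_frequencies(corpus)
print(filter_incomplete("the cat sat on the mat", freqs))
```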

Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers

1 code implementation CVPR 2023 Sifan Long, Zhen Zhao, Jimin Pi, Shengsheng Wang, Jingdong Wang

In this paper, we emphasize the importance of diverse global semantics and propose an efficient token decoupling and merging method that jointly considers token importance and diversity for token pruning.

Computational Efficiency, Diversity, +1
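
A simplified sketch of importance-aware pruning with a merge step: the top-k tokens by an importance score (e.g., CLS attention) are kept, and the pruned tokens are fused into one averaged token so their information is not discarded entirely. The paper's decoupling and merging rule is more elaborate; this is only an illustration.

```python
# Simplified token pruning: keep top-k important tokens, merge the rest into one token.
import torch

def prune_and_merge(tokens: torch.Tensor, importance: torch.Tensor, keep: int):
    """tokens: (N, D), importance: (N,). Returns (keep + 1, D) tokens."""
    top = importance.topk(keep).indices
    keep_mask = torch.zeros(tokens.size(0), dtype=torch.bool)
    keep_mask[top] = True
    kept = tokens[keep_mask]
    merged = tokens[~keep_mask].mean(dim=0, keepdim=True)   # fuse pruned tokens
    return torch.cat([kept, merged], dim=0)

# Toy usage: a ViT-like sequence of 197 tokens, keeping half of them.
out = prune_and_merge(torch.randn(197, 768), torch.rand(197), keep=98)
```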

Next-item Recommendations in Short Sessions

no code implementations • 15 Jul 2021 • Wenzhuo Song, Shoujin Wang, Yan Wang, Shengsheng Wang

The retrieved similar sessions are then utilized to complement and refine the preference representation that the local module learns from the current short session, enabling more accurate next-item recommendations within that session.

Few-Shot Learning, Recommendation Systems, +1
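
A minimal sketch of the complement-with-similar-sessions idea: the current short session is encoded, the most similar past sessions are retrieved by cosine similarity, and their representations are mixed into the session representation before scoring candidate items. Mean pooling and a fixed mixing weight are simplifying assumptions, not the paper's modules.

```python
# Minimal sketch: complement a short session's representation with similar past sessions.
import numpy as np

def recommend(current_items, past_sessions, item_emb, top_m=3, mix=0.5):
    """current_items: list of item ids; past_sessions: list of lists of item ids."""
    encode = lambda ids: item_emb[ids].mean(axis=0)          # mean-pooled session encoding
    cur = encode(current_items)
    past = np.stack([encode(s) for s in past_sessions])
    sims = past @ cur / (np.linalg.norm(past, axis=1) * np.linalg.norm(cur) + 1e-8)
    neighbors = past[np.argsort(-sims)[:top_m]].mean(axis=0) # top-m similar sessions
    session_repr = (1 - mix) * cur + mix * neighbors         # complemented preference
    scores = item_emb @ session_repr                         # score all candidate items
    return np.argsort(-scores)                               # ranked item ids

item_emb = np.random.randn(100, 32)
ranked = recommend([3, 17], [[3, 17, 9], [5, 17, 40], [2, 8]], item_emb)
```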
