Search Results for author: Jitao Sang

Found 50 papers, 19 papers with code

A LLM-based Controllable, Scalable, Human-Involved User Simulator Framework for Conversational Recommender Systems

no code implementations • 13 May 2024 • Lixi Zhu, Xiaowen Huang, Jitao Sang

Through experiments and case studies in two conversational recommendation scenarios, we show that our framework can adapt to a variety of conversational recommendation settings and effectively simulate users' personalized preferences.

Recommendation Systems

Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library

1 code implementation • 27 Apr 2024 • Lei Cheng, Xiaowen Huang, Jitao Sang, Jian Yu

In the adversarial robustness part, we introduce the fundamental principles and classical methods of adversarial attacks on and defenses for recommender systems.

Adversarial Robustness Non-Adversarial Robustness +1

Prescribing the Right Remedy: Mitigating Hallucinations in Large Vision-Language Models via Targeted Instruction Tuning

no code implementations • 16 Apr 2024 • Rui Hu, Yahan Tu, Jitao Sang

In this paper, we propose a targeted instruction data generation framework named DFTG that is tailored to the hallucination specificity of different models.

Hallucination Specificity

Inference-Time Rule Eraser: Distilling and Removing Bias Rules to Mitigate Bias in Deployed Models

no code implementations • 7 Apr 2024 • Yi Zhang, Jitao Sang

Machine learning models often make predictions based on biased features such as gender, race, and other social attributes, posing significant fairness risks, especially in societal applications, such as hiring, banking, and criminal justice.

Decision Making Fairness

Exploring the Privacy Protection Capabilities of Chinese Large Language Models

no code implementations • 27 Mar 2024 • YuQi Yang, Xiaowen Huang, Jitao Sang

Large language models (LLMs), renowned for their impressive capabilities in various tasks, have significantly advanced artificial intelligence.

How Reliable is Your Simulator? Analysis on the Limitations of Current LLM-based User Simulators for Conversational Recommendation

no code implementations • 25 Mar 2024 • Lixi Zhu, Xiaowen Huang, Jitao Sang

Through multiple experiments on two widely used datasets in the field of conversational recommendation, we highlight several issues with the current evaluation methods for LLM-based user simulators: (1) Data leakage, which occurs in the conversational history and the user simulator's replies, leads to inflated evaluation results.

Recommendation Systems

AIGCs Confuse AI Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models

no code implementations • 13 Mar 2024 • YiFei Gao, Jiaqi Wang, Zhiyu Lin, Jitao Sang

Remarkably, our findings shed light on a consistent AIGC hallucination bias: the object hallucinations induced by synthetic images are characterized by a greater quantity and a more uniform position distribution, even though these synthetic images do not manifest unrealistic or additional relevant visual features compared to natural images.


Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning

1 code implementation • 1 Feb 2024 • Jitao Sang, Yuhang Wang, Jing Zhang, Yanxu Zhu, Chao Kong, Junhong Ye, Shuyu Wei, Jinlin Xiao

In the first phase, based on human supervision, the quality of weak supervision is enhanced through a combination of scalable oversight and ensemble learning, reducing the capability gap between weak teachers and strong students.
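One way to picture the ensemble-learning half of this first phase is weak teachers voting on each label, with the consolidated labels then supervising the strong student. The sketch below is an illustrative assumption (simple majority vote over hypothetical teacher outputs), not the paper's exact procedure:

```python
from collections import Counter

def ensemble_weak_labels(teacher_labels):
    """Combine per-sample labels from several weak teachers by majority
    vote, yielding one consolidated label per sample."""
    n_samples = len(teacher_labels[0])
    consolidated = []
    for i in range(n_samples):
        votes = Counter(labels[i] for labels in teacher_labels)
        consolidated.append(votes.most_common(1)[0][0])
    return consolidated

# Three weak teachers disagree on some samples; voting recovers the
# majority opinion, which would then supervise the strong student.
teachers = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
]
print(ensemble_weak_labels(teachers))  # [1, 0, 1, 1]
```

Majority voting is only one of many ways to aggregate weak supervision; the point is that the consolidated signal is higher quality than any single weak teacher's.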

Ensemble Learning In-Context Learning

Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception

1 code implementation • 29 Jan 2024 • Junyang Wang, Haiyang Xu, Jiabo Ye, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, Jitao Sang

To assess the performance of Mobile-Agent, we introduced Mobile-Eval, a benchmark for evaluating mobile device operations.

Language-assisted Vision Model Debugger: A Sample-Free Approach to Finding and Fixing Bugs

no code implementations • 9 Dec 2023 • Chaoquan Jiang, Jinqiang Wang, Rui Hu, Jitao Sang

To address this issue, we propose a language-assisted diagnostic method that uses texts instead of images to diagnose bugs in vision models, based on multi-modal models (e.g., CLIP).
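A toy illustration of the text-as-probe idea: compare each class's text embedding against that class's classifier weights, and flag classes where the two align poorly. The embeddings and threshold below are made-up stand-ins for real CLIP outputs, not the paper's pipeline:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def flag_suspect_classes(text_embs, class_weights, threshold=0.5):
    """Flag classes whose classifier weights align poorly with the text
    embedding of their own class name -- a possible model bug."""
    return [name for name, t in text_embs.items()
            if cosine(t, class_weights[name]) < threshold]

# Hypothetical 3-d "embeddings": the weights for "cat" have drifted
# away from the text embedding of "cat".
text_embs = {"cat": [1.0, 0.0, 0.0], "dog": [0.0, 1.0, 0.0]}
class_weights = {"cat": [0.1, 0.9, 0.2], "dog": [0.05, 0.95, 0.0]}
print(flag_suspect_classes(text_embs, class_weights))  # ['cat']
```

The appeal of such a probe is that it needs no labeled images: the text side of a multi-modal model supplies the reference directions.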

Language Modelling Large Language Model

CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models

1 code implementation • 28 Nov 2023 • Yuhang Wang, Yanxu Zhu, Chao Kong, Shuyu Wei, Xiaoyuan Yi, Xing Xie, Jitao Sang

This benchmark serves as a valuable resource for cultural studies in LLMs, paving the way for more culturally aware and sensitive models.

Adversarial Prompt Tuning for Vision-Language Models

1 code implementation • 19 Nov 2023 • Jiaming Zhang, Xingjun Ma, Xin Wang, Lingyu Qiu, Jiaqi Wang, Yu-Gang Jiang, Jitao Sang

With the rapid advancement of multimodal learning, pre-trained Vision-Language Models (VLMs) such as CLIP have demonstrated remarkable capacities in bridging the gap between visual and language modalities.

Adversarial Robustness

AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation

1 code implementation • 13 Nov 2023 • Junyang Wang, Yuhang Wang, Guohai Xu, Jing Zhang, Yukai Gu, Haitao Jia, Jiaqi Wang, Haiyang Xu, Ming Yan, Ji Zhang, Jitao Sang

Despite making significant progress in multi-modal tasks, current Multi-modal Large Language Models (MLLMs) encounter the significant challenge of hallucinations, which may lead to harmful consequences.

Attribute Hallucination +2

Evaluation and Analysis of Hallucination in Large Vision-Language Models

1 code implementation • 29 Aug 2023 • Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang

In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework.

Hallucination Hallucination Evaluation

Benign Shortcut for Debiasing: Fair Visual Recognition via Intervention with Shortcut Features

no code implementations • 13 Aug 2023 • Yi Zhang, Jitao Sang, Junyang Wang, Dongmei Jiang, YaoWei Wang

To this end, we propose Shortcut Debiasing: first transfer the target task's learning of bias attributes from bias features to shortcut features, and then employ causal intervention to eliminate the shortcut features during inference.
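The inference-time intervention can be sketched with a linear toy model: bias is routed into a dedicated shortcut slot during training, and that slot is zeroed out at prediction time. The features and weights below are hypothetical, a minimal sketch of the intervention step only:

```python
def predict(features, shortcut, weights, intervene=True):
    """Linear score over concatenated task and shortcut features. The
    inference-time causal intervention replaces the shortcut features
    with zeros, so bias routed into them cannot influence the output."""
    if intervene:
        shortcut = [0.0] * len(shortcut)
    x = features + shortcut
    return sum(w * xi for w, xi in zip(weights, x))

weights = [0.5, 0.5, 2.0]    # the last weight attaches to the shortcut slot
task_feats = [1.0, 1.0]
biased_shortcut = [1.0]      # stands in for a sensitive-attribute signal
print(predict(task_feats, biased_shortcut, weights, intervene=False))  # 3.0
print(predict(task_feats, biased_shortcut, weights, intervene=True))   # 1.0
```

The gap between the two scores is exactly the contribution the shortcut slot would otherwise make; removing it leaves only the task features' evidence.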


CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility

1 code implementation • 19 Jul 2023 • Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou

In this paper, we present CValues, the first Chinese human values evaluation benchmark to measure the alignment ability of LLMs in terms of both safety and responsibility criteria.

Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks

no code implementations • 13 Jul 2023 • Jiaming Zhang, Jitao Sang, Qi Yi, Changsheng Xu

Harnessing the concept of non-robust features, we elaborate on two guiding principles for surrogate model selection to explain why the foundational model is an optimal choice for this role.

Adversarial Attack Attribute +1

Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction

1 code implementation • 6 Jun 2023 • Yuhang Wang, Dongyuan Lu, Chao Kong, Jitao Sang

Many works employed prompt tuning methods to automatically optimize prompt queries and extract the factual knowledge stored in Pretrained Language Models.


Towards Black-box Adversarial Example Detection: A Data Reconstruction-based Method

no code implementations • 3 Jun 2023 • YiFei Gao, Zhiyu Lin, Yunfan Yang, Jitao Sang

Black-box attacks, although a more realistic threat that has motivated various black-box adversarial-training-based defense methods, have attracted little attention in adversarial example detection.
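The core of reconstruction-based detection can be sketched as: fit a reconstruction model on clean data, then flag inputs whose reconstruction error is abnormally large. The "reconstructor" below is a deliberately crude stand-in (a per-feature mean) for the trained model such a method would actually use; data and threshold are invented:

```python
def fit_reconstructor(clean_data):
    """Stand-in for a reconstruction model trained on clean data: here
    simply the per-feature mean (a real system would train e.g. an
    autoencoder)."""
    n = len(clean_data)
    dim = len(clean_data[0])
    return [sum(x[i] for x in clean_data) / n for i in range(dim)]

def reconstruction_error(x, recon):
    """Squared distance between an input and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, recon))

def is_adversarial(x, recon, threshold):
    # Inputs the reconstructor cannot explain well are flagged.
    return reconstruction_error(x, recon) > threshold

clean = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
recon = fit_reconstructor(clean)
print(is_adversarial([1.0, 1.05], recon, threshold=0.5))  # False
print(is_adversarial([3.0, -2.0], recon, threshold=0.5))  # True
```

Because the detector only needs clean data and the input itself, it is attack-agnostic, which is what makes it a fit for the black-box setting.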

Adversarial Defense

Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber

1 code implementation • 6 May 2023 • Rui Hu, Yahan Tu, Jitao Sang

This paper first presents experimental analyses revealing that the existing biased models overfit to bias-conflicting samples in the training data, which negatively impacts the debiasing performance of the target models.

From Association to Generation: Text-only Captioning by Unsupervised Cross-modal Mapping

1 code implementation • 26 Apr 2023 • Junyang Wang, Ming Yan, Yi Zhang, Jitao Sang

Although previous works have given CLIP generation capacity through additional language models, a modality gap between CLIP's representations of the different modalities, together with CLIP's inability to model the offset of this gap, prevents concepts from transferring across modalities.

Decoder Image Captioning +4

Improved Visual Fine-tuning with Natural Language Supervision

1 code implementation • ICCV 2023 • Junyang Wang, Yuanhong Xu, Juhua Hu, Ming Yan, Jitao Sang, Qi Qian

Fine-tuning a visual pre-trained model can leverage the semantic information from large-scale pre-training data and mitigate the over-fitting problem on downstream vision tasks with limited training examples.

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias

no code implementations • 1 Mar 2023 • Shangxi Wu, Qiuyang He, Fangzhao Wu, Jitao Sang, YaoWei Wang, Changsheng Xu

In this work, we found that the backdoor attack can construct an artificial bias similar to the model bias derived in standard training.

Backdoor Attack Knowledge Distillation

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples

1 code implementation • CVPR 2023 • Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yu-Gang Jiang, YaoWei Wang, Changsheng Xu

Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains.

Data Poisoning

Zero-shot Image Captioning by Anchor-augmented Vision-Language Space Alignment

no code implementations • 14 Nov 2022 • Junyang Wang, Yi Zhang, Ming Yan, Ji Zhang, Jitao Sang

We further propose Anchor Augment to guide the generative model's attention to the fine-grained information in the representation of CLIP.

Computational Efficiency Image Captioning +2

Fair Visual Recognition via Intervention with Proxy Features

no code implementations • 2 Nov 2022 • Yi Zhang, Jitao Sang, Junyang Wang

To this end, we propose Proxy Debiasing: first transfer the target task's learning of bias information from bias features to artificial proxy features, and then employ causal intervention to eliminate the proxy features at inference.


FairCLIP: Social Bias Elimination based on Attribute Prototype Learning and Representation Neutralization

no code implementations • 26 Oct 2022 • Junyang Wang, Yi Zhang, Jitao Sang

Although FairCLIP is applied to eliminate bias in image retrieval, it neutralizes the representation itself, which is shared by all CLIP downstream tasks.

Attribute Fairness +2

Counterfactually Measuring and Eliminating Social Bias in Vision-Language Pre-training Models

1 code implementation • 3 Jul 2022 • Yi Zhang, Junyang Wang, Jitao Sang

Vision-Language Pre-training (VLP) models have achieved state-of-the-art performance in numerous cross-modal tasks.


Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System

no code implementations • 19 Jun 2022 • Jiaming Zhang, Qi Yi, Dongyuan Lu, Jitao Sang

In light of the growing concerns regarding the unauthorized use of facial recognition systems and its implications on individual privacy, the exploration of adversarial perturbations as a potential countermeasure has gained traction.

Face Recognition

Towards Adversarial Attack on Vision-Language Pre-training Models

1 code implementation • 19 Jun 2022 • Jiaming Zhang, Qi Yi, Jitao Sang

While vision-language pre-training (VLP) models have shown revolutionary improvements on various vision-language (V+L) tasks, their adversarial robustness remains largely unexplored.

Adversarial Attack Adversarial Robustness

Investigating and Explaining the Frequency Bias in Image Classification

1 code implementation • 6 May 2022 • Zhiyu Lin, YiFei Gao, Jitao Sang

Specifically, our investigations verify that the spectral density of datasets mainly affects the learning priority, while the class consistency mainly affects the feature discrimination.
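The spectral-density notion can be illustrated on a tiny 1-D signal with a naive DFT: a smooth signal concentrates its power in low-frequency bins, a rapidly alternating one in high-frequency bins. This is a generic Fourier-analysis sketch (1-D, invented signals), not the paper's 2-D image analysis:

```python
import math

def power_spectrum(signal):
    """Naive DFT power spectrum of a 1-D signal (stands in for the 2-D
    image spectra such a study would analyze)."""
    n = len(signal)
    spec = []
    for k in range(n):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(signal))
        spec.append(re * re + im * im)
    return spec

# A smooth signal puts its power in the lowest frequency bins, while a
# rapidly alternating one pushes power to the highest (Nyquist) bin.
smooth = [math.sin(2 * math.pi * i / 8) for i in range(8)]
alternating = [(-1) ** i for i in range(8)]
print(power_spectrum(smooth)[1] > power_spectrum(smooth)[4])            # True
print(power_spectrum(alternating)[4] > power_spectrum(alternating)[1])  # True
```

Comparing where a dataset's power concentrates along the spectrum is the kind of measurement that lets one talk about which frequencies a model encounters, and hence learns, first.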

Classification Image Classification

Understanding and Testing Generalization of Deep Networks on Out-of-Distribution Data

no code implementations • 17 Nov 2021 • Jitao Sang, Jinqiang Wang, Rui Hu, Chaoquan Jiang

Deep network models perform excellently on In-Distribution (ID) data, but can significantly fail on Out-Of-Distribution (OOD) data.

Knowledge Graph-enhanced Sampling for Conversational Recommender System

1 code implementation • 13 Oct 2021 • Mengyuan Zhao, Xiaowen Huang, Lixi Zhu, Jitao Sang, Jian Yu

Then, two samplers are designed to enhance knowledge: one samples fuzzy items with high uncertainty to elicit user preferences, and the other samples reliable negative items to update the recommender. Together they enable efficient acquisition of user preferences and model updating, providing a powerful solution for CRS to handle the exploration-and-exploitation (E&E) problem.
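The uncertainty-sampling half can be sketched as picking the items whose predicted preference score sits closest to 0.5, i.e., the ones the recommender is least sure about and thus gains most from asking about. The scoring scheme and items below are illustrative assumptions:

```python
def fuzzy_samples(item_scores, k=2):
    """Pick the k items the recommender is least certain about: those
    whose predicted preference score is closest to 0.5."""
    return sorted(item_scores, key=lambda kv: abs(kv[1] - 0.5))[:k]

# Hypothetical (item, predicted-preference) pairs.
scores = [("movie_a", 0.95), ("movie_b", 0.55),
          ("movie_c", 0.10), ("movie_d", 0.48)]
print([name for name, _ in fuzzy_samples(scores)])  # ['movie_d', 'movie_b']
```

Confident predictions (0.95, 0.10) are skipped: asking about them wastes a conversational turn, while the fuzzy items resolve the most uncertainty per question.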

Recommendation Systems

Towards Predictable Feature Attribution: Revisiting and Improving Guided BackPropagation

no code implementations • 29 Sep 2021 • Guanhua Zheng, Jitao Sang, Wang Haonan, Changsheng Xu

Recently, backpropagation (BP)-based feature attribution methods have been widely adopted to interpret the internal mechanisms of convolutional neural networks (CNNs), and are expected to be human-understandable (lucidity) and faithful to decision-making processes (fidelity).

Decision Making

Benign Adversarial Attack: Tricking Models for Goodness

no code implementations • 26 Jul 2021 • Jitao Sang, Xian Zhao, Jiaming Zhang, Zhiyu Lin

In spite of the successful application in many fields, machine learning models today suffer from notorious problems like vulnerability to adversarial examples.

Adversarial Attack Attribute +2

An Experimental Study of Semantic Continuity for Deep Learning Models

no code implementations • 19 Nov 2020 • Shangxi Wu, Jitao Sang, Xian Zhao, Lizhang Chen

Deep learning models suffer from the problem of semantic discontinuity: small perturbations in the input space tend to cause semantic-level interference to the model output.
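Semantic discontinuity can be quantified directly: perturb the input slightly many times and average the resulting change in model output. The sketch below contrasts a hypothetical continuous model with a thresholded one sitting on its decision boundary; both models and the probe point are invented for illustration:

```python
import random

def output_shift(model, x, epsilon=0.01, trials=100, seed=0):
    """Average change in model output under small random input
    perturbations; large values indicate semantic discontinuity."""
    rng = random.Random(seed)
    base = model(x)
    total = 0.0
    for _ in range(trials):
        noisy = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
        total += abs(model(noisy) - base)
    return total / trials

smooth_model = lambda x: sum(x)                         # continuous response
brittle_model = lambda x: 1.0 if sum(x) > 1.0 else 0.0  # thresholded response

x = [0.5, 0.5]  # sits exactly on the brittle model's decision boundary
print(output_shift(smooth_model, x) < output_shift(brittle_model, x))  # True
```

Near the threshold, the brittle model's output jumps by a full unit for roughly half the perturbations, while the smooth model shifts by at most a few hundredths: the same phenomenon the abstract describes, in miniature.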

Adversarial Robustness

Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

no code implementations • 27 Jul 2020 • Yi Zhang, Jitao Sang

Our data analysis on facial attribute recognition demonstrates (1) the attribution of model bias from imbalanced training data distribution and (2) the potential of adversarial examples in balancing data distribution.

Adversarial Attack Attribute +4

Adversarial Privacy-preserving Filter

2 code implementations • 25 Jul 2020 • Jiaming Zhang, Jitao Sang, Xian Zhao, Xiaowen Huang, Yanfeng Sun, Yongli Hu

While widely adopted in practical applications, face recognition has been critically discussed regarding the malicious use of face images and the potential privacy problems, e.g., deceiving payment systems and causing personal sabotage.

Adversarial Attack Face Recognition +1

MMCGAN: Generative Adversarial Network with Explicit Manifold Prior

no code implementations • 18 Jun 2020 • Guanhua Zheng, Jitao Sang, Changsheng Xu

Since the basic assumption of conventional manifold learning fails in the case of sparse and uneven data distribution, we introduce a new target for manifold learning, Minimum Manifold Coding (MMC), to encourage a simple and unfolded manifold.

Generative Adversarial Network

Adaptive Adversarial Logits Pairing

no code implementations • 25 May 2020 • Shangxi Wu, Jitao Sang, Kaiyuan Xu, Guanhua Zheng, Changsheng Xu

Specifically, AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features, and an adaptive sample weighting module by setting sample-specific training weights to balance between logits pairing loss and classification loss.
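The weighting module's objective can be sketched as a per-sample convex combination of the logits-pairing term and the classification term. The fixed weight and toy logits below are illustrative; in the method described, the weight is set adaptively per sample:

```python
def pairing_loss(clean_logits, adv_logits):
    """Squared distance between clean and adversarial logits: the
    pairing term that pulls the two outputs together."""
    return sum((c - a) ** 2 for c, a in zip(clean_logits, adv_logits))

def weighted_pairing_objective(clean_logits, adv_logits, cls_loss, weight):
    """Per-sample weighted sum of the logits-pairing term and the
    classification term (a minimal sketch of the balancing idea)."""
    return (weight * pairing_loss(clean_logits, adv_logits)
            + (1 - weight) * cls_loss)

clean = [2.0, -1.0]
adv = [1.0, 0.0]
print(weighted_pairing_objective(clean, adv, cls_loss=0.4, weight=0.5))  # 1.2
```

Raising the weight for hard samples emphasizes logit alignment (robustness); lowering it emphasizes the classification loss (clean accuracy), which is the trade-off such a balance is meant to manage.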

Classification General Classification +1

A Generalization Theory based on Independent and Task-Identically Distributed Assumption

no code implementations • 28 Nov 2019 • Guanhua Zheng, Jitao Sang, Houqiang Li, Jian Yu, Changsheng Xu

The derived generalization bound based on the ITID assumption identifies the significance of hypothesis invariance in guaranteeing generalization performance.

Image Classification

Attention, Please! Adversarial Defense via Activation Rectification and Preservation

no code implementations • 24 Nov 2018 • Shangxi Wu, Jitao Sang, Kaiyuan Xu, Jiaming Zhang, Jian Yu

This study provides a new understanding of the adversarial attack problem by examining the correlation between adversarial attack and visual attention change.

Adversarial Attack Adversarial Defense

Understanding Deep Learning Generalization by Maximum Entropy

no code implementations • ICLR 2018 • Guanhua Zheng, Jitao Sang, Changsheng Xu

The DNN is then regarded as approximating the feature conditions with multilayer feature learning, and is proved to be a recursive solution to the maximum entropy principle.

