Search Results for author: Yichao Wu

Found 21 papers, 4 papers with code

Maximizing User Experience with LLMOps-Driven Personalized Recommendation Systems

no code implementations1 Apr 2024 Chenxi Shi, Penghao Liang, Yichao Wu, Tong Zhan, Zhengyu Jin

The integration of LLMOps into personalized recommendation systems marks a significant advancement in managing LLM-driven applications.

Navigate · Recommendation Systems

ViTCN: Vision Transformer Contrastive Network For Reasoning

no code implementations15 Mar 2024 Bo Song, Yuanhao Xu, Yichao Wu

Machine learning models have achieved significant milestones in various domains: computer vision models, for example, deliver exceptional results in object recognition, and in natural language processing, Large Language Models (LLMs) such as GPT can hold conversations with human-like proficiency.

Object Recognition

Research on the Application of Deep Learning-based BERT Model in Sentiment Analysis

no code implementations13 Mar 2024 Yichao Wu, Zhengyu Jin, Chenxi Shi, Penghao Liang, Tong Zhan

This paper explores the application of deep learning techniques, particularly focusing on BERT models, in sentiment analysis.

Sentiment Analysis
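As a generic illustration of BERT-style sentiment analysis (not the paper's exact setup), the sketch below uses the Hugging Face transformers pipeline with an off-the-shelf checkpoint; the model name and example sentence are placeholders.

```python
# Generic BERT-based sentiment analysis via Hugging Face transformers.
# The checkpoint below is an off-the-shelf example, not the paper's model.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The new interface is fast and pleasant to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```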

Emerging Synergies Between Large Language Models and Machine Learning in Ecommerce Recommendations

no code implementations5 Mar 2024 Xiaonan Xu, Yichao Wu, Penghao Liang, Yuhang He, Han Wang

With the boom of e-commerce and web applications, recommender systems have become an important part of our daily lives, providing personalized recommendations based on the user's preferences.

Collaborative Filtering · Recommendation Systems

LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models

no code implementations28 Feb 2024 Yichao Wu, Yafei Xiang, Shuning Huo, Yulu Gong, Penghao Liang

In addressing the computational and memory demands of fine-tuning Large Language Models (LLMs), we propose LoRA-SP (Streamlined Partial Parameter Adaptation), a novel approach utilizing randomized half-selective parameter freezing within the Low-Rank Adaptation (LoRA) framework.
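A minimal sketch of the general idea, assuming a PyTorch-style LoRA layer: randomly mark roughly half of the LoRA adapter tensors as frozen. The names `LoRALinear`, `freeze_half_of_lora`, and `freeze_ratio` are illustrative, and the actual LoRA-SP selection rule may differ.

```python
# Illustrative only: randomized freezing of ~half of the LoRA adapter
# parameters, in the spirit of "half-selective parameter freezing".
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():          # frozen pretrained weights
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_A.t() @ self.lora_B.t()

def freeze_half_of_lora(model, freeze_ratio=0.5, seed=0):
    """Randomly mark a fraction of the LoRA adapter tensors as non-trainable."""
    g = torch.Generator().manual_seed(seed)
    lora_params = [p for n, p in model.named_parameters() if "lora_" in n]
    n_freeze = int(len(lora_params) * freeze_ratio)
    for i in torch.randperm(len(lora_params), generator=g)[:n_freeze]:
        lora_params[i].requires_grad_(False)
```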

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

1 code implementation2 Aug 2023 Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

However, these defenses now suffer from high inference computational overhead and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges the feasibility of deployed models in practice.

Latent Distribution Adjusting for Face Anti-Spoofing

2 code implementations16 May 2023 Qinghong Sun, Zhenfei Yin, Yichao Wu, Yuanhan Zhang, Jing Shao

In this work, we propose a unified framework called Latent Distribution Adjusting (LDA), with latent, discriminative, adaptive, and generic properties, to improve the robustness of the FAS model by adjusting complex data distributions with multiple prototypes.

Face Anti-Spoofing · Prototype Selection
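To illustrate the multi-prototype idea in general terms (not the paper's training procedure), the sketch below scores a feature against several prototypes per class and keeps the best match; `multi_prototype_logits` and the tensor shapes are assumptions for the example.

```python
# Sketch of classification with multiple prototypes per class: score a
# feature against every prototype and take the best match within each class.
import torch
import torch.nn.functional as F

def multi_prototype_logits(features, prototypes):
    """features: (B, D); prototypes: (num_classes, K, D)."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=2)
    sims = torch.einsum("bd,ckd->bck", f, p)   # cosine similarity to each prototype
    return sims.max(dim=2).values              # best prototype per class -> (B, C)
```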

Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework

no code implementations6 May 2023 Ruijia Wu, Yuhang Wang, Huafeng Shi, Zhipeng Yu, Yichao Wu, Ding Liang

In this paper, we propose the Adversarial Decoupling Augmentation Framework (ADAF), addressing these issues by targeting the image-text fusion module to enhance the defensive performance of facial privacy protection algorithms.

Denoising

ICD-Face: Intra-class Compactness Distillation for Face Recognition

no code implementations ICCV 2023 Zhipeng Yu, Jiaheng Liu, Haoyu Qin, Yichao Wu, Kun Hu, Jiayi Tian, Ding Liang

Knowledge distillation is an effective model compression method that improves the performance of a lightweight student model by transferring knowledge from a well-performing teacher model; it has been widely adopted in many computer vision tasks, including face recognition (FR).

Face Recognition · Knowledge Distillation +1
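A standard feature-level distillation sketch for face recognition, shown only to illustrate teacher-to-student transfer; the specific intra-class compactness objective of ICD-Face is not reproduced here.

```python
# Generic feature distillation: pull student embeddings toward the
# (frozen) teacher's embeddings via cosine similarity.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_emb, teacher_emb):
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    return (1.0 - (s * t).sum(dim=1)).mean()   # 1 - cosine similarity

student_emb = torch.randn(32, 512, requires_grad=True)
teacher_emb = torch.randn(32, 512)             # teacher runs without gradients
loss = feature_distillation_loss(student_emb, teacher_emb.detach())
loss.backward()
```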

Improving Robust Fairness via Balance Adversarial Training

no code implementations15 Sep 2022 ChunYu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu

Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes, known as the robust fairness problem.

Fairness
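As a rough sketch of adversarial training with class-wise loss reweighting, one plausible way to target the robust-fairness gap rather than the paper's exact Balance Adversarial Training rule; the function names and hyperparameters are illustrative.

```python
# One adversarial-training step with per-class loss weights.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def balanced_at_step(model, x, y, class_weights):
    """class_weights: (num_classes,) tensor of per-class loss weights."""
    x_adv = pgd_attack(model, x, y)
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    return (class_weights[y] * per_sample).mean()
```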

Universal Backdoor Attacks Detection via Adaptive Adversarial Probe

no code implementations12 Sep 2022 Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu

Most detection methods are designed to verify whether a model is infected with presumed types of backdoor attacks, yet in practice the adversary is likely to generate diverse backdoor attacks that are unforeseen to defenders, which challenges current detection strategies.

Scheduling

DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection

1 code implementation12 Jul 2022 Gang Li, Xiang Li, Yujie Wang, Yichao Wu, Ding Liang, Shanshan Zhang

Specifically, we propose the Inverse NMS Clustering (INC) and Rank Matching (RM) to instantiate the dense supervision, without the widely used, conventional sparse pseudo labels.

Object Detection +1

Robust Face Anti-Spoofing with Dual Probabilistic Modeling

no code implementations27 Apr 2022 Yuanhan Zhang, Yichao Wu, Zhenfei Yin, Jing Shao, Ziwei Liu

In this work, we attempt to fill this gap by automatically addressing the noise problem from both label and data perspectives in a probabilistic manner.

Face Anti-Spoofing

CoupleFace: Relation Matters for Face Recognition Distillation

no code implementations12 Apr 2022 Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, Ke Xu

In this work, we observe that mutual relation knowledge between samples is also important for improving the discriminative ability of the student model's learned representation, and we propose an effective face recognition distillation method called CoupleFace that additionally introduces Mutual Relation Distillation (MRD) into the existing distillation framework.

Face Recognition · Knowledge Distillation +1
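A generic relation-distillation sketch that aligns the student's pairwise similarity matrix with the teacher's; the actual Mutual Relation Distillation loss is more involved, so treat this only as an illustration.

```python
# Relation distillation: compare sample-to-sample similarity structure
# between student and teacher embeddings.
import torch
import torch.nn.functional as F

def relation_distillation_loss(student_emb, teacher_emb):
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    return F.mse_loss(s @ s.t(), t @ t.t())
```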

PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection

1 code implementation30 Mar 2022 Gang Li, Xiang Li, Yujie Wang, Yichao Wu, Ding Liang, Shanshan Zhang

Specifically, for pseudo labeling, existing works focus only on the classification score and fail to guarantee the localization precision of pseudo boxes; for consistency training, the widely adopted random-resize training considers only label-level consistency and misses feature-level consistency, which also plays an important role in ensuring scale invariance.

Object Detection +1
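A minimal sketch of score-thresholded pseudo labeling for semi-supervised detection, the baseline behavior the paper improves on; PseCo's additional localization-quality checks and feature-level consistency are omitted, and the tensor shapes are illustrative.

```python
# Keep only teacher-predicted boxes whose classification score passes
# a fixed threshold, then use them as pseudo ground truth for the student.
import torch

def select_pseudo_boxes(boxes, scores, labels, score_thr=0.9):
    """boxes: (N, 4), scores: (N,), labels: (N,) from the teacher detector."""
    keep = scores > score_thr
    return boxes[keep], labels[keep]

boxes = torch.rand(100, 4)
scores = torch.rand(100)
labels = torch.randint(0, 80, (100,))
pseudo_boxes, pseudo_labels = select_pseudo_boxes(boxes, scores, labels)
```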

Knowledge Distillation for Object Detection via Rank Mimicking and Prediction-guided Feature Imitation

no code implementations9 Dec 2021 Gang Li, Xiang Li, Yujie Wang, Shanshan Zhang, Yichao Wu, Ding Liang

Based on the two observations, we propose Rank Mimicking (RM) and Prediction-guided Feature Imitation (PFI) for distilling one-stage detectors, respectively.

Image Classification · Knowledge Distillation +3
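A toy version of the rank-mimicking idea: match the teacher's and student's score distributions over candidate anchors with a KL term. How anchors are grouped per instance follows the paper, not this sketch, and the function name and temperature are illustrative.

```python
# Distill the ranking of candidate anchors by matching softened score
# distributions between teacher and student.
import torch
import torch.nn.functional as F

def rank_mimicking_loss(student_scores, teacher_scores, tau=1.0):
    """scores: (num_instances, num_candidate_anchors)."""
    p_t = F.softmax(teacher_scores / tau, dim=1)
    log_p_s = F.log_softmax(student_scores / tau, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```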

One to Transfer All: A Universal Transfer Framework for Vision Foundation Model with Few Data

no code implementations24 Nov 2021 Yujie Wang, Junqin Huang, Mengya Gao, Yichao Wu, Zhenfei Yin, Ding Liang, Junjie Yan

Transferring to thousands of downstream tasks in a general way with only a small amount of data is becoming a trend in the application of foundation models.

INTERN: A New Learning Paradigm Towards General Vision

no code implementations16 Nov 2021 Jing Shao, Siyu Chen, Yangguang Li, Kun Wang, Zhenfei Yin, Yinan He, Jianing Teng, Qinghong Sun, Mengya Gao, Jihao Liu, Gengshi Huang, Guanglu Song, Yichao Wu, Yuming Huang, Fenggang Liu, Huan Peng, Shuo Qin, Chengyu Wang, Yujie Wang, Conghui He, Ding Liang, Yu Liu, Fengwei Yu, Junjie Yan, Dahua Lin, Xiaogang Wang, Yu Qiao

Enormous waves of technological innovation over the past several years, marked by advances in AI technologies, are profoundly reshaping industry and society.

Inter-class Discrepancy Alignment for Face Recognition

no code implementations2 Mar 2021 Jiaheng Liu, Yudong Wu, Yichao Wu, Zhenmao Li, Chen Ken, Ding Liang, Junjie Yan

In this study, we make a key observation that the local context represented by the similarities between the instance and its inter-class neighbors plays an important role for FR.

Face Recognition
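To make the "local context" notion concrete, here is an assumed helper that collects an embedding's similarities to its nearest inter-class neighbors; the paper's alignment procedure itself is not shown, and the function name and `k` are illustrative.

```python
# Similarities between each embedding and its top-k neighbors from
# other classes (intra-class pairs are masked out).
import torch
import torch.nn.functional as F

def inter_class_neighbor_sims(embeddings, labels, k=5):
    """embeddings: (N, D); labels: (N,). Returns (N, k) similarities."""
    e = F.normalize(embeddings, dim=1)
    sims = e @ e.t()
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    sims = sims.masked_fill(same_class, float("-inf"))
    return sims.topk(k, dim=1).values
```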

DAM: Discrepancy Alignment Metric for Face Recognition

no code implementations ICCV 2021 Jiaheng Liu, Yudong Wu, Yichao Wu, Chuming Li, Xiaolin Hu, Ding Liang, Mengyu Wang

To estimate the LID of each face image in the verification process, we propose two types of LID Estimation (LIDE) methods, which are reference-based and learning-based estimation methods, respectively.

Face Recognition

Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework

no code implementations27 May 2019 Zhenmao Li, Yichao Wu, Ken Chen, Yudong Wu, Shunfeng Zhou, Jiaheng Liu, Junjie Yan

Example weighting is an effective solution to the training bias problem; however, most previous methods are limited by human knowledge and require laborious tuning of hyperparameters.
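A toy example of per-sample loss weighting, the mechanism such weighting frameworks plug into; how the weights themselves are produced (the data-driven part of the paper) is not shown, and `weighted_cross_entropy` is an illustrative name.

```python
# Scale each sample's loss by a per-example weight before averaging.
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, targets, sample_weights):
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (sample_weights * per_sample).sum() / sample_weights.sum()

logits = torch.randn(16, 10, requires_grad=True)
targets = torch.randint(0, 10, (16,))
weights = torch.rand(16)                    # stand-in for learned weights
loss = weighted_cross_entropy(logits, targets, weights)
loss.backward()
```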
