Search Results for author: Lixin Fan

Found 63 papers, 15 papers with code

PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models

no code implementations18 Jun 2024 Tao Fan, Yan Kang, Weijing Chen, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang

In the context of real-world applications, leveraging large language models (LLMs) for domain-specific tasks often faces two major challenges: domain-specific knowledge privacy and constrained resources.

Decoder Language Modelling +3

FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation

no code implementations3 Jun 2024 Hanlin Gu, Jiahuan Luo, Yan Kang, Yuan YAO, Gongxi Zhu, Bowen Li, Lixin Fan, Qiang Yang

Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.

Privacy Preserving Vertical Federated Learning

No Free Lunch Theorem for Privacy-Preserving LLM Inference

no code implementations31 May 2024 Xiaojin Zhang, Yulin Fei, Yan Kang, Wei Chen, Lixin Fan, Hai Jin, Qiang Yang

Therefore, it is essential to evaluate the balance between the risk of privacy leakage and the loss of utility when deploying effective protection mechanisms.

Privacy Preserving

Unlearning during Learning: An Efficient Federated Machine Unlearning Method

no code implementations24 May 2024 Hanlin Gu, Gongxi Zhu, Jie Zhang, Xinyuan Zhao, Yuxing Han, Lixin Fan, Qiang Yang

To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged.

Federated Learning Machine Unlearning

Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity

no code implementations23 May 2024 Hanlin Gu, WinKent Ong, Chee Seng Chan, Lixin Fan

The advent of Federated Learning (FL) highlights the practical necessity for the 'right to be forgotten' for all clients, allowing them to request data deletion from the machine learning model's service provider.

Federated Learning

Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data

no code implementations23 May 2024 Haoran Li, Xinyuan Zhao, Dadi Guo, Hanlin Gu, Ziqian Zeng, Yuxing Han, Yangqiu Song, Lixin Fan, Qiang Yang

In this paper, we introduce a Federated Domain-specific Knowledge Transfer (FDKT) framework, which enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy.

Federated Learning Transfer Learning

A Survey on Contribution Evaluation in Vertical Federated Learning

1 code implementation3 May 2024 Yue Cui, Chung-ju Huang, Yuzhu Zhang, Leye Wang, Lixin Fan, Xiaofang Zhou, Qiang Yang

Vertical Federated Learning (VFL) has emerged as a critical approach in machine learning to address privacy concerns associated with centralized data storage and processing.

Vertical Federated Learning

FedEval-LLM: Federated Evaluation of Large Language Models on Downstream Tasks with Collective Wisdom

no code implementations18 Apr 2024 Yuanqin He, Yan Kang, Lixin Fan, Qiang Yang

To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without relying on labeled test sets or external tools, thus ensuring strong privacy-preserving capability.

Federated Learning Privacy Preserving

Hyperparameter Optimization for SecureBoost via Constrained Multi-Objective Federated Learning

no code implementations6 Apr 2024 Yan Kang, Ziyao Ren, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang

This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, which are pivotal elements toward a trustworthy federated learning system.

Bayesian Optimization Hyperparameter Optimization +2

FedCQA: Answering Complex Queries on Multi-Source Knowledge Graphs via Federated Learning

no code implementations22 Feb 2024 Qi Hu, Weifeng Jiang, Haoran Li, ZiHao Wang, Jiaxin Bai, Qianren Mao, Yangqiu Song, Lixin Fan, JianXin Li

An entity can be involved in multiple knowledge graphs, and reasoning over multiple KGs to answer complex queries on multi-source KGs is important for discovering knowledge across graphs.

Complex Query Answering Federated Learning +2

Evaluating Membership Inference Attacks and Defenses in Federated Learning

1 code implementation9 Feb 2024 Gongxi Zhu, Donghao Li, Hanlin Gu, Yuxing Han, Yuan YAO, Lixin Fan, Qiang Yang

First, combining model information from multiple communication rounds (multi-temporal) enhances the overall effectiveness of MIAs compared to using model information from a single epoch.

Federated Learning
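The multi-temporal observation above can be illustrated with a generic loss-threshold membership inference attack on synthetic per-round losses. This is a hedged sketch of the general idea, not the paper's attack; the loss distributions below are hypothetical.

```python
import numpy as np

def mia_scores(per_round_losses: np.ndarray) -> np.ndarray:
    """Membership score per sample: lower average loss across the observed
    communication rounds suggests the sample was in the training set."""
    return -per_round_losses.mean(axis=1)

rng = np.random.default_rng(0)
rounds = 10
# Hypothetical losses: members tend to have lower loss than non-members.
members = rng.normal(0.4, 0.3, size=(100, rounds))
nonmembers = rng.normal(0.9, 0.3, size=(100, rounds))
losses = np.vstack([members, nonmembers])
labels = np.array([1] * 100 + [0] * 100)

# Multi-temporal: average over all observed rounds (per-round noise cancels).
multi = mia_scores(losses)
acc_multi = ((multi > np.median(multi)).astype(int) == labels).mean()

# Single round: threshold on one round's loss only (noisier separation).
single = -losses[:, 0]
acc_single = ((single > np.median(single)).astype(int) == labels).mean()

print(f"single-round accuracy: {acc_single:.2f}, multi-round accuracy: {acc_multi:.2f}")
```

Averaging the loss over ten rounds shrinks its standard deviation by roughly sqrt(10), so the member/non-member distributions separate more cleanly than with any single round.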

Reconstructing Close Human Interactions from Multiple Views

1 code implementation29 Jan 2024 Qing Shuai, Zhiyuan Yu, Zhize Zhou, Lixin Fan, Haijun Yang, Can Yang, Xiaowei Zhou

This paper addresses the challenging task of reconstructing the poses of multiple individuals engaged in close interactions, captured by multiple calibrated cameras.

Pose Estimation

Grounding Foundation Models through Federated Transfer Learning: A General Framework

no code implementations29 Nov 2023 Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang

Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on the FTL-FM framework to categorize state-of-the-art FTL-FM works, and comprehensively review FTL-FM works based on the proposed taxonomy.

Federated Learning Privacy Preserving +1

A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models

no code implementations24 Oct 2023 Yuanfeng Song, Yuanqin He, Xuefang Zhao, Hanlin Gu, Di Jiang, Haijun Yang, Lixin Fan, Qiang Yang

The springing up of Large Language Models (LLMs) has shifted the community from single-task-orientated natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm.

Multi-Task Learning Prompt Engineering

FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models

1 code implementation16 Oct 2023 Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang

FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.

Federated Learning Privacy Preserving

SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning

no code implementations20 Jul 2023 Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang

To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto-optimal solutions, each of which is a set of hyperparameters achieving an optimal trade-off between utility loss, training cost, and privacy leakage.

Privacy Preserving Vertical Federated Learning
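The Pareto-optimality criterion behind such multi-objective hyperparameter search can be illustrated with a generic dominance filter. This is a minimal sketch of the criterion, not the CMOSB algorithm itself, and the candidate tuples below are hypothetical.

```python
from typing import List, Tuple

def pareto_front(points: List[Tuple[float, float, float]]) -> List[Tuple[float, float, float]]:
    """Return the Pareto-optimal subset of (utility_loss, training_cost,
    privacy_leakage) tuples, where every objective is minimized."""
    def dominates(a, b):
        # a dominates b if a is no worse in every objective and
        # strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical hyperparameter evaluations: (utility loss, training cost, privacy leakage)
candidates = [(0.10, 5.0, 0.3), (0.08, 7.0, 0.2), (0.12, 6.0, 0.4), (0.08, 7.0, 0.5)]
print(pareto_front(candidates))
```

Here the third candidate is dominated by the first (worse in all three objectives) and the fourth by the second (equal utility and cost, more leakage), so only the first two survive.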

Temporal Gradient Inversion Attacks with Robust Optimization

no code implementations13 Jun 2023 Bowen Li, Hanlin Gu, Ruoxin Chen, Jie Li, Chentao Wu, Na Ruan, Xueming Si, Lixin Fan

We investigate a Temporal Gradient Inversion Attack with a Robust Optimization framework, called TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients.

Federated Learning Privacy Preserving

A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning

no code implementations28 May 2023 Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang

Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.

Federated Learning Meta-Learning

FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature

no code implementations10 May 2023 Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang, Xiaochun Cao

Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.

Federated Learning

FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof

no code implementations8 May 2023 Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun Cao, Qiang Yang

Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.

Federated Learning

Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning

no code implementations29 Apr 2023 Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang

Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.

Fairness Federated Learning

A Game-theoretic Framework for Privacy-preserving Federated Learning

no code implementations11 Apr 2023 Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang

To address this, we propose the first game-theoretic framework that considers both FL defenders and attackers in terms of their respective payoffs, which include computational costs, FL model utilities, and privacy leakage risks.

Federated Learning Privacy Preserving

Probably Approximately Correct Federated Learning

no code implementations10 Apr 2023 Xiaojin Zhang, Anbu Huang, Lixin Fan, Kai Chen, Qiang Yang

However, existing multi-objective optimization frameworks are very time-consuming and do not guarantee the existence of the Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve.

Federated Learning PAC learning

FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation

no code implementations30 Jan 2023 Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang

Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.

Privacy Preserving Vertical Federated Learning

FedCut: A Spectral Analysis Framework for Reliable Detection of Byzantine Colluders

no code implementations24 Nov 2022 Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang

Extensive experimental results under a variety of settings justify the superiority of FedCut, which demonstrates extremely robust model performance (MP) under various attacks.

Community Detection Federated Learning

FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model

no code implementations14 Nov 2022 Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren

To deter such misbehavior, it is essential to establish a mechanism for verifying the ownership of the model as well as tracing its origin to the leaker among the FL participants.

Continual Learning Federated Learning

A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning

1 code implementation8 Sep 2022 Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang

We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms.

Privacy Preserving Vertical Federated Learning

Trading Off Privacy, Utility and Efficiency in Federated Learning

no code implementations1 Sep 2022 Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang

In addition, it is a mandate for a federated learning system to achieve high efficiency in order to enable large-scale model training and deployment.

Vertical Federated Learning

A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning

1 code implementation18 Aug 2022 Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang

In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.

Inference Attack Representation Learning +2

FadMan: Federated Anomaly Detection across Multiple Attributed Networks

no code implementations27 May 2022 Nannan Wu, Ning Zhang, Wenjun Wang, Lixin Fan, Qiang Yang

The proposed algorithm, FadMan, is a vertical federated learning framework for a public node aligned with many private nodes holding different features. It is validated on two tasks, correlated anomaly detection on multiple attributed networks and anomaly detection on an attributeless network, using five real-world datasets.

Anomaly Detection Data Integration +1

No Free Lunch Theorem for Security and Utility in Federated Learning

no code implementations11 Mar 2022 Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang

In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.

Federated Learning Privacy Preserving

SecureBoost+: Large Scale and High-Performance Vertical Federated Gradient Boosting Decision Tree

no code implementations21 Oct 2021 Tao Fan, Weijing Chen, Guoqiang Ma, Yan Kang, Lixin Fan, Qiang Yang

Gradient boosting decision tree (GBDT) is an ensemble machine learning algorithm that is widely used in industry due to its good performance and ease of interpretation.

Privacy Preserving Vertical Federated Learning
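The residual-fitting loop at the heart of (centralized) gradient boosting can be sketched with depth-1 regression trees in plain NumPy. This is an illustrative toy of the generic GBDT idea that SecureBoost+ federates, not the SecureBoost+ implementation.

```python
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by exhaustive search
    over all features and all candidate thresholds."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r - np.where(left, lv, rv)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def predict_stump(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def gbdt_fit(X, y, n_rounds=20, lr=0.5):
    """Gradient boosting for squared loss: each new stump fits the residuals
    left by the current ensemble, shrunk by the learning rate."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)
        pred += lr * predict_stump(stump, X)
        stumps.append(stump)
    return y.mean(), stumps

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] > 0).astype(float)  # a simple target the ensemble should learn
base, stumps = gbdt_fit(X, y)
pred = base + sum(0.5 * predict_stump(s, X) for s in stumps)  # lr matches gbdt_fit
print(f"train MSE: {np.mean((pred - y) ** 2):.6f}")
```

In the vertical federated setting, the split search in `fit_stump` is what gets distributed: each party evaluates candidate splits on its own features, with gradients protected by encryption.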

FedIPR: Ownership Verification for Federated Deep Neural Network Models

1 code implementation27 Sep 2021 Bowen Li, Lixin Fan, Hanlin Gu, Jie Li, Qiang Yang

To address these risks, the ownership verification of federated learning models is a prerequisite that protects federated learning model intellectual property rights (IPR), i.e., FedIPR.

Federated Learning

Federated Deep Learning with Bayesian Privacy

no code implementations27 Sep 2021 Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang

To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.

Federated Learning Image Classification +1

ICDAR 2021 Competition on Integrated Circuit Text Spotting and Aesthetic Assessment

1 code implementation12 Jul 2021 Chun Chet Ng, Akmalul Khairi Bin Nazaruddin, Yeong Khang Lee, Xinyu Wang, Yuliang Liu, Chee Seng Chan, Lianwen Jin, Yipeng Sun, Lixin Fan

With hundreds of thousands of electronic chip components being manufactured every day, chip manufacturers have seen an increasing demand for a more efficient and effective way of inspecting the quality of printed texts on chip components.

Text Spotting

Protecting Intellectual Property of Generative Adversarial Networks From Ambiguity Attacks

no code implementations CVPR 2021 Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang

Ever since Machine Learning as a Service emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third parties.

Image Generation Image Super-Resolution +1

Ternary Hashing

no code implementations16 Mar 2021 Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang

This paper proposes a novel ternary hash encoding for learning-to-hash methods, which provides a principled and more efficient coding scheme with performance better than that of state-of-the-art binary hashing counterparts.

Retrieval

Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack

1 code implementation8 Feb 2021 Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang

Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third parties.

Image Generation Image Super-Resolution +1

Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness

no code implementations27 Nov 2020 Yilun Jin, Lixin Fan, Kam Woh Ng, Ce Ju, Qiang Yang

Deep neural networks (DNNs) are known to be prone to adversarial attacks, for which many remedies are proposed.

Protect, Show, Attend and Tell: Empowering Image Captioning Models with Ownership Protection

1 code implementation25 Aug 2020 Jian Han Lim, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang

By and large, existing Intellectual Property (IP) protection for deep neural networks typically i) focuses on the image classification task only, and ii) follows a standard digital watermarking framework that was conventionally used to protect the ownership of multimedia and video content.

Image Captioning Image Classification

Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks

1 code implementation NeurIPS 2019 Lixin Fan, Kam Woh Ng, Chee Seng Chan

With the substantial amount of time, resources, and human (team) effort invested to explore and develop successful deep neural networks (DNN), there emerges an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of legitimate owners.

L2RS: A Learning-to-Rescore Mechanism for Automatic Speech Recognition

no code implementations25 Oct 2019 Yuanfeng Song, Di Jiang, Xuefang Zhao, Qian Xu, Raymond Chi-Wing Wong, Lixin Fan, Qiang Yang

Modern Automatic Speech Recognition (ASR) systems primarily rely on scores from an Acoustic Model (AM) and a Language Model (LM) to rescore the N-best lists.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +5

[Extended version] Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks

2 code implementations16 Sep 2019 Lixin Fan, Kam Woh Ng, Chee Seng Chan

With the substantial amount of time, resources, and human (team) effort invested to explore and develop successful deep neural networks (DNN), there emerges an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of legitimate owners.

Simultaneously Learning Architectures and Features of Deep Neural Networks

no code implementations11 Jun 2019 Tinghuai Wang, Lixin Fan, Huiling Wang

This paper presents a novel method which simultaneously learns the number of filters and network features repeatedly over multiple epochs.

Audio Classification Diversity +3

Digital Passport: A Novel Technological Strategy for Intellectual Property Protection of Convolutional Neural Networks

no code implementations10 May 2019 Lixin Fan, KamWoh Ng, Chee Seng Chan

In order to prevent deep neural networks from being infringed by unauthorized parties, we propose a generic solution which embeds a designated digital passport into a network, and subsequently, either paralyzes the network functionalities for unauthorized usage or maintains its functionalities in the presence of a verified passport.

General Classification valid

Fast Registration for cross-source point clouds by using weak regional affinity and pixel-wise refinement

no code implementations11 Mar 2019 Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan

Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerging research problem in computer vision.

Point Cloud Registration

Proceedings of AAAI 2019 Workshop on Network Interpretability for Deep Learning

no code implementations25 Jan 2019 Quanshi Zhang, Lixin Fan, Bolei Zhou

This is the Proceedings of AAAI 2019 Workshop on Network Interpretability for Deep Learning

A Universal Logic Operator for Interpretable Deep Convolution Networks

no code implementations20 Jan 2019 KamWoh Ng, Lixin Fan, Chee Seng Chan

Explaining neural network computation in terms of probabilistic/fuzzy logical operations has attracted much attention due to its simplicity and high interpretability.

Object Detection in Equirectangular Panorama

1 code implementation21 May 2018 Wenyan Yang, Yanlin Qian, Francesco Cricri, Lixin Fan, Joni-Kristian Kamarainen

We introduce a high-resolution equirectangular panorama (360-degree, virtual reality) dataset for object detection and propose a multi-projection variant of the YOLO detector.

Object object-detection +1

Depth Masked Discriminative Correlation Filter

no code implementations26 Feb 2018 Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas, Lixin Fan, Francesco Cricri

Depth information provides a strong cue for occlusion detection and handling, but until recently it has been largely omitted in generic object tracking due to the lack of suitable benchmark datasets and applications.

Object Tracking

A Theoretical Investigation of Graph Degree as an Unsupervised Normality Measure

no code implementations24 Jan 2018 Caglar Aytekin, Francesco Cricri, Lixin Fan, Emre Aksu

To gain an in-depth theoretical understanding, in this manuscript we investigate the graph degree from spectral-graph-clustering-based and kernel-based points of view, and draw connections to a recent kernel method for the two-sample problem.

Clustering Graph Clustering +2

Memory-Efficient Deep Salient Object Segmentation Networks on Gridized Superpixels

no code implementations27 Dec 2017 Caglar Aytekin, Xingyang Ni, Francesco Cricri, Lixin Fan, Emre Aksu

By using these encoded images, we train a memory-efficient network using only 0.048% of the number of parameters that other deep salient object detection networks have.

Object object-detection +5

Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning

no code implementations15 Nov 2017 Lixin Fan

This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e., stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer.
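The equivalence result in the paper concerns GHNs specifically; the underlying intuition for the purely linear case is the well-known associativity of convolution, which a minimal NumPy check can demonstrate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)   # input signal
k1 = rng.standard_normal(3)   # first conv kernel
k2 = rng.standard_normal(5)   # second conv kernel

# Applying two linear convolutions in sequence equals one convolution
# with the composed (wider) kernel k1 * k2, by associativity.
stacked = np.convolve(np.convolve(x, k1), k2)
single = np.convolve(x, np.convolve(k1, k2))
print("stacked == single wide convolution:", np.allclose(stacked, single))
```

The composed kernel has length len(k1) + len(k2) - 1, i.e., two small stacked kernels collapse into one wider kernel.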

Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network

no code implementations NeurIPS 2017 Lixin Fan

We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic.
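On [0,1]-valued inputs, the generalized hamming distance is commonly written as the fuzzy XOR a + b - 2ab; the following is a minimal sketch assuming that form (the assumption is mine, not quoted from the abstract).

```python
def ghd(a: float, b: float) -> float:
    """Generalized hamming distance: reduces to XOR (per-bit hamming
    distance) on {0, 1} inputs and extends smoothly to fuzzy values."""
    return a + b - 2 * a * b

# On crisp bits it behaves exactly like the classical per-bit hamming distance:
assert ghd(0, 0) == 0 and ghd(1, 1) == 0
assert ghd(0, 1) == 1 and ghd(1, 0) == 1
# On fuzzy values it gives a graded distance:
print(ghd(0.5, 0.9))
```

Note that ghd(0.5, b) = 0.5 for any b, mirroring the fuzzy-logic intuition that a maximally uncertain input is equidistant from everything.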

Happy Travelers Take Big Pictures: A Psychological Study with Machine Learning and Big Data

no code implementations22 Sep 2017 Xuefeng Liang, Lixin Fan, Yuen Peng Loh, Yang Liu, Song Tong

In psychology, theory-driven research is usually conducted with extensive laboratory experiments, yet rarely tested or disproved with big data.

BIG-bench Machine Learning

A coarse-to-fine algorithm for registration in 3D street-view cross-source point clouds

no code implementations24 Oct 2016 Xiaoshui Huang, Jian Zhang, Qiang Wu, Lixin Fan, Chun Yuan

In this paper, different from previous ICP-based methods, and from a statistical view, we propose an effective coarse-to-fine algorithm to detect and register a small-scale SFM point cloud in a large-scale Lidar point cloud.
