1 code implementation • NAACL 2022 • Haoran Li, Yangqiu Song, Lixin Fan
To this end, we propose effective defense objectives to protect against persona leakage from hidden states.
no code implementations • 28 May 2023 • Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang
Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.
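The snippet leaves the formulation implicit; one natural reading, with $\epsilon_p$, $\epsilon_u$ and $\epsilon_e$ standing for the bounded measurements of privacy leakage, utility loss and efficiency reduction and $M$ ranging over protection mechanisms (symbols assumed here, not quoted from the paper), is the constrained problem
$$\min_{M}\ \epsilon_u(M) \quad \text{s.t.} \quad \epsilon_p(M) \le \xi_p,\ \ \epsilon_e(M) \le \xi_e,$$
where $\xi_p$ and $\xi_e$ are acceptable upper bounds on leakage and efficiency reduction.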
no code implementations • 10 May 2023 • Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang, Xiaochun Cao
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
no code implementations • 8 May 2023 • Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun Cao, Qiang Yang
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
no code implementations • 29 Apr 2023 • Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.
no code implementations • 11 Apr 2023 • Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang
To address this, we propose the first game-theoretic framework that considers both FL defenders and attackers in terms of their respective payoffs, which include computational costs, FL model utilities, and privacy leakage risks.
no code implementations • 10 Apr 2023 • Xiaojin Zhang, Anbu Huang, Lixin Fan, Kai Chen, Qiang Yang
However, existing multi-objective optimization frameworks are very time-consuming and do not guarantee the existence of the Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve.
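As a rough illustration only (the paper's actual transformation is not given in this snippet), the simplest such reduction is a weighted-sum scalarization of the three objectives; the weights and measurements below are hypothetical.

import numpy as np

# Hypothetical bounded measurements of the three objectives for one
# candidate protection mechanism (values are made up for illustration).
privacy_leakage, utility_loss, training_cost = 0.12, 0.35, 0.80

# Weighted-sum scalarization: collapse the objective vector into a single
# scalar; the weights encode the desired trade-off between the three factors.
weights = np.array([0.5, 0.3, 0.2])
objective = float(weights @ np.array([privacy_leakage, utility_loss, training_cost]))
print(objective)  # minimize this single scalar instead of the vector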
no code implementations • 30 Jan 2023 • Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang
Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.
no code implementations • 24 Nov 2022 • Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
Extensive experimental results under a variety of settings justify the superiority of FedCut, which demonstrates extremely robust model performance (MP) under various attacks.
no code implementations • 14 Nov 2022 • Shuo Shao, Wenyuan Yang, Hanlin Gu, Jian Lou, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren
Copyright protection of the Federated Learning (FL) model has become a major concern since malicious clients in FL can stealthily distribute or sell the FL model to other parties.
no code implementations • 8 Sep 2022 • Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang
We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely-deployed VFL algorithms.
no code implementations • 1 Sep 2022 • Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang
In addition, it is mandatory for a federated learning system to achieve high efficiency in order to enable large-scale model training and deployment.
no code implementations • 18 Aug 2022 • Yuanqin He, Yan Kang, Jiahuan Luo, Lixin Fan, Qiang Yang
The core idea of FedHSSL is to utilize cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentations) of samples within each party to improve the representation learning capability of the joint VFL model through SSL (e.g., SimSiam).
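A minimal sketch of that idea, assuming a SimSiam-style negative cosine similarity and omitting predictor heads, stop-gradients and the actual FedHSSL architecture (all names and shapes below are hypothetical):

import numpy as np

def neg_cosine(p, z):
    # SimSiam-style similarity between two batches of L2-normalized representations.
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -(p * z).sum(axis=1).mean()

rng = np.random.default_rng(0)
# Cross-party views: representations of the same aligned samples from party A and party B.
h_a, h_b = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
# Local views: two augmentations of party A's own samples.
h_aug1, h_aug2 = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
loss = neg_cosine(h_a, h_b) + neg_cosine(h_aug1, h_aug2)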
no code implementations • 27 May 2022 • Nannan Wu, Ning Zhang, Wenjun Wang, Lixin Fan, Qiang Yang
The proposed algorithm, FadMan, is a vertical federated learning framework for a public node aligned with many private nodes of different features. It is validated on two tasks, correlated anomaly detection on multiple attributed networks and anomaly detection on an attributeless network, using five real-world datasets.
1 code implementation • 26 Apr 2022 • Haoran Li, Yangqiu Song, Lixin Fan
To this end, we propose effective defense objectives to protect against persona leakage from hidden states.
no code implementations • 11 Mar 2022 • Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
1 code implementation • 27 Sep 2021 • Bowen Li, Lixin Fan, Hanlin Gu, Jie Li, Qiang Yang
To address these risks, ownership verification of federated learning models is a prerequisite for protecting federated learning model intellectual property rights (IPR), i.e., FedIPR.
no code implementations • 27 Sep 2021 • Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang
To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop pays a special interest in theoretic foundations, limitations, and new application trends in the scope of XAI.
1 code implementation • 12 Jul 2021 • Chun Chet Ng, Akmalul Khairi Bin Nazaruddin, Yeong Khang Lee, Xinyu Wang, Yuliang Liu, Chee Seng Chan, Lianwen Jin, Yipeng Sun, Lixin Fan
With hundreds of thousands of electronic chip components being manufactured every day, chip manufacturers face an increasing demand for more efficient and effective ways of inspecting the quality of printed texts on chip components.
no code implementations • CVPR 2021 • Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
Ever since Machine Learning as a Service emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by unauthorized third parties.
no code implementations • 16 Mar 2021 • Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang
This paper proposes a novel ternary hash encoding for learning-to-hash methods, which provides a principled and more efficient coding scheme whose performance surpasses that of state-of-the-art binary hashing counterparts.
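As a loose sketch (the paper's actual encoding and threshold rule are not given in this snippet), ternary hashing quantizes real-valued features into codes drawn from {-1, 0, +1} instead of {-1, +1}:

import numpy as np

def ternary_hash(features, threshold=0.5):
    # Quantize real-valued features into ternary codes {-1, 0, +1};
    # the fixed threshold is an illustrative choice, not the paper's rule.
    codes = np.zeros(features.shape, dtype=np.int8)
    codes[features > threshold] = 1
    codes[features < -threshold] = -1
    return codes

codes = ternary_hash(np.random.randn(2, 8))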
1 code implementation • 8 Feb 2021 • Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by unauthorized third parties.
no code implementations • 27 Nov 2020 • Yilun Jin, Lixin Fan, Kam Woh Ng, Ce Ju, Qiang Yang
Deep neural networks (DNNs) are known to be prone to adversarial attacks, for which many remedies are proposed.
1 code implementation • 25 Aug 2020 • Jian Han Lim, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
By and large, existing Intellectual Property (IP) protection on deep neural networks typically i) focuses on the image classification task only, and ii) follows a standard digital watermarking framework conventionally used to protect the ownership of multimedia and video content.
no code implementations • 20 Jun 2020 • Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
This paper investigates capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.
1 code implementation • NeurIPS 2019 • Lixin Fan, Kam Woh Ng, Chee Seng Chan
With substantial amounts of time, resources and human (team) effort invested in exploring and developing successful deep neural networks (DNN), there is an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of their legitimate owners.
no code implementations • 25 Oct 2019 • Yuanfeng Song, Di Jiang, Xuefang Zhao, Qian Xu, Raymond Chi-Wing Wong, Lixin Fan, Qiang Yang
Modern Automatic Speech Recognition (ASR) systems primarily rely on scores from an Acoustic Model (AM) and a Language Model (LM) to rescore the N-best lists.
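In the standard log-linear rescoring setup (sketched here for context, not quoted from the paper), each N-best hypothesis $h$ for acoustics $X$ receives the combined score
$$\text{score}(h) = \log P_{\mathrm{AM}}(X \mid h) + \lambda \, \log P_{\mathrm{LM}}(h),$$
and the highest-scoring hypothesis is selected, with $\lambda$ the language-model weight.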
2 code implementations • 16 Sep 2019 • Lixin Fan, Kam Woh Ng, Chee Seng Chan
With substantial amounts of time, resources and human (team) effort invested in exploring and developing successful deep neural networks (DNN), there is an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of their legitimate owners.
no code implementations • 11 Jun 2019 • Tinghuai Wang, Lixin Fan, Huiling Wang
This paper presents a novel method which simultaneously learns the number of filters and network features repeatedly over multiple epochs.
no code implementations • 10 May 2019 • Lixin Fan, KamWoh Ng, Chee Seng Chan
In order to prevent deep neural networks from being infringed by unauthorized parties, we propose a generic solution which embeds a designated digital passport into a network and subsequently either paralyzes the network functionalities for unauthorized usage or maintains its functionalities in the presence of a verified passport.
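One way such a passport layer can be realised, assuming the scale and bias of a layer are derived from the passport through the layer's own weights (the shapes and averaging rule below are illustrative assumptions, not the paper's exact design):

import numpy as np

def passport_affine(conv_weights, passport):
    # conv_weights: (out_ch, in_ch, k, k); passport: (in_ch, k, k).
    # Each output channel's affine factor is its filter's average response
    # to the passport pattern; a wrong or missing passport degrades the factors.
    return (conv_weights * passport[None]).mean(axis=(1, 2, 3))

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 3, 3, 3))
gamma = passport_affine(weights, rng.normal(size=(3, 3, 3)))  # scale passport
beta = passport_affine(weights, rng.normal(size=(3, 3, 3)))   # bias passport
# gamma and beta then replace the learned scale/bias of the following normalization layer.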
no code implementations • 11 Mar 2019 • Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan
Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerging research problem in computer vision.
no code implementations • 25 Jan 2019 • Quanshi Zhang, Lixin Fan, Bolei Zhou
This is the Proceedings of the AAAI 2019 Workshop on Network Interpretability for Deep Learning.
no code implementations • 20 Jan 2019 • KamWoh Ng, Lixin Fan, Chee Seng Chan
Explaining neural network computation in terms of probabilistic/fuzzy logical operations has attracted much attention due to its simplicity and high interpretability.
1 code implementation • 21 May 2018 • Wenyan Yang, Yanlin Qian, Francesco Cricri, Lixin Fan, Joni-Kristian Kamarainen
We introduce a high-resolution equirectangular panorama (360-degree, virtual reality) dataset for object detection and propose a multi-projection variant of the YOLO detector.
no code implementations • 26 Feb 2018 • Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas, Lixin Fan, Francesco Cricri
Depth information provides a strong cue for occlusion detection and handling, but has been largely omitted in generic object tracking until recently due to lack of suitable benchmark datasets and applications.
no code implementations • 24 Jan 2018 • Caglar Aytekin, Francesco Cricri, Lixin Fan, Emre Aksu
In order to have an in-depth theoretical understanding, in this manuscript, we investigate the graph degree from spectral-graph-clustering-based and kernel-based points of view and draw connections to a recent kernel method for the two-sample problem.
no code implementations • 27 Dec 2017 • Caglar Aytekin, Xingyang Ni, Francesco Cricri, Lixin Fan, Emre Aksu
By using these encoded images, we train a memory-efficient network using only 0.048% of the number of parameters that other deep salient object detection networks have.
no code implementations • 15 Nov 2017 • Lixin Fan
This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e., stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer.
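Setting the GHN-specific fuzzy operations aside, the underlying linear fact is that composing convolutions is itself a convolution, so two stacked kernels collapse into one wider kernel:
$$(x * k_1) * k_2 = x * (k_1 * k_2),$$
where, in 1D, kernels of sizes $s_1$ and $s_2$ combine into an effective kernel of size $s_1 + s_2 - 1$.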
no code implementations • NeurIPS 2017 • Lixin Fan
We revisit the fuzzy neural network with a cornerstone notion of generalized Hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic.
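For reference (the definition is not spelled out in this snippet, so it is recalled here as an assumption), the generalized Hamming distance of two fuzzy values $a, b \in [0, 1]$ is commonly written as
$$a \oplus b = a + b - 2ab,$$
which reduces to the ordinary XOR-based Hamming distance when $a, b \in \{0, 1\}$.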
no code implementations • 22 Sep 2017 • Xuefeng Liang, Lixin Fan, Yuen Peng Loh, Yang Liu, Song Tong
In psychology, theory-driven research is usually conducted with extensive laboratory experiments, yet rarely tested or disproved with big data.
no code implementations • 24 Oct 2016 • Xiaoshui Huang, Jian Zhang, Qiang Wu, Lixin Fan, Chun Yuan
In this paper, different from previous ICP-based methods, we take a statistical view and propose an effective coarse-to-fine algorithm to detect and register a small-scale SFM point cloud in a large-scale Lidar point cloud.
no code implementations • 18 Aug 2016 • Xiaoshui Huang, Jian Zhang, Lixin Fan, Qiang Wu, Chun Yuan
We propose a systematic approach for registering cross-source point clouds.