no code implementations • 27 Dec 2017 • Caglar Aytekin, Xingyang Ni, Francesco Cricri, Lixin Fan, Emre Aksu
By using these encoded images, we train a memory-efficient network using only 0.048% of the number of parameters that other deep salient object detection networks have.
no code implementations • 26 Feb 2018 • Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas, Lixin Fan, Francesco Cricri
Depth information provides a strong cue for occlusion detection and handling, but has been largely omitted in generic object tracking until recently due to lack of suitable benchmark datasets and applications.
no code implementations • 24 Jan 2018 • Caglar Aytekin, Francesco Cricri, Lixin Fan, Emre Aksu
In order to develop an in-depth theoretical understanding, in this manuscript we investigate the graph degree from both spectral-graph-clustering and kernel-based points of view, and draw connections to a recent kernel method for the two-sample problem.
no code implementations • 15 Nov 2017 • Lixin Fan
This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHNs) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e., stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer.
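The claimed equivalence mirrors a classical property of linear convolution: in the absence of nonlinearities, composing two convolutions is itself a convolution with the composed kernel. A minimal numpy sketch of that linear-case intuition (not the paper's full GHN analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)   # input signal
k1 = rng.normal(size=5)   # first convolution kernel
k2 = rng.normal(size=3)   # second convolution kernel

# Two stacked (purely linear) convolution layers...
stacked = np.convolve(np.convolve(x, k1), k2)

# ...equal a single, wider convolution whose kernel is k1 * k2.
single = np.convolve(x, np.convolve(k1, k2))

assert np.allclose(stacked, single)
```

The composed kernel has length 5 + 3 - 1 = 7, i.e., the single equivalent layer is "wider" than either of the stacked ones.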
no code implementations • NeurIPS 2017 • Lixin Fan
We revisit fuzzy neural networks with the cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic.
no code implementations • 22 Sep 2017 • Xuefeng Liang, Lixin Fan, Yuen Peng Loh, Yang Liu, Song Tong
In psychology, theory-driven research is usually conducted with extensive laboratory experiments, yet is rarely tested or disproved with big data.
no code implementations • 24 Oct 2016 • Xiaoshui Huang, Jian Zhang, Qiang Wu, Lixin Fan, Chun Yuan
In this paper, departing from previous ICP-based methods and taking a statistical view, we propose an effective coarse-to-fine algorithm to detect and register a small-scale SfM point cloud within a large-scale LiDAR point cloud.
no code implementations • 18 Aug 2016 • Xiaoshui Huang, Jian Zhang, Lixin Fan, Qiang Wu, Chun Yuan
We propose a systematic approach for registering cross-source point clouds.
no code implementations • 20 Jan 2019 • KamWoh Ng, Lixin Fan, Chee Seng Chan
Explaining neural network computation in terms of probabilistic/fuzzy logical operations has attracted much attention due to its simplicity and high interpretability.
no code implementations • 25 Jan 2019 • Quanshi Zhang, Lixin Fan, Bolei Zhou
This is the Proceedings of AAAI 2019 Workshop on Network Interpretability for Deep Learning
no code implementations • 11 Mar 2019 • Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan
Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerging research problem in computer vision.
no code implementations • 10 May 2019 • Lixin Fan, KamWoh Ng, Chee Seng Chan
In order to prevent deep neural networks from being infringed by unauthorized parties, we propose a generic solution which embeds a designated digital passport into a network and, subsequently, either paralyzes the network functionalities for unauthorized usage or maintains its functionalities in the presence of a verified passport.
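The gating idea can be illustrated with a toy sketch: derive a per-channel scale for a layer from the passport, so a verified passport leaves the layer functional while a forged one scrambles its outputs. This is a hypothetical simplification for illustration, not the paper's actual passport-layer design:

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(4, 8))    # layer weights (4 output channels)
passport = rng.normal(size=8)  # the designated digital passport

def forward(x, passport_in):
    # Hypothetical passport layer: each output channel is scaled by
    # its response to the supplied passport, so the passport acts as
    # a key baked into the forward pass.
    scale = W @ passport_in
    return scale * (W @ x)

x = rng.normal(size=8)
ok = forward(x, passport)             # verified passport
bad = forward(x, rng.normal(size=8))  # forged passport

# Without the correct passport the outputs diverge, effectively
# "paralyzing" the layer for unauthorized users.
assert not np.allclose(ok, bad)
```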
no code implementations • 11 Jun 2019 • Tinghuai Wang, Lixin Fan, Huiling Wang
This paper presents a novel method which simultaneously learns the number of filters and network features repeatedly over multiple epochs.
no code implementations • 25 Oct 2019 • Yuanfeng Song, Di Jiang, Xuefang Zhao, Qian Xu, Raymond Chi-Wing Wong, Lixin Fan, Qiang Yang
Modern Automatic Speech Recognition (ASR) systems primarily rely on scores from an Acoustic Model (AM) and a Language Model (LM) to rescore the N-best lists.
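Classic N-best rescoring combines the two scores in the log domain with an interpolation weight and re-ranks the hypotheses. A toy sketch with illustrative scores (the hypothesis texts, scores, and weight are made up for the example):

```python
def rescore(nbest, lm_weight=0.8):
    # Re-rank N-best hypotheses by interpolated log-score:
    # total = acoustic-model score + lm_weight * language-model score.
    return max(nbest, key=lambda h: h["am"] + lm_weight * h["lm"])

nbest = [
    {"text": "recognize speech",    "am": -12.1, "lm": -3.2},
    {"text": "wreck a nice beach",  "am": -11.8, "lm": -7.9},
]

best = rescore(nbest)
print(best["text"])  # "recognize speech"
```

Here the second hypothesis has the better acoustic score, but the language model's preference flips the ranking after interpolation.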
no code implementations • 20 Jun 2020 • Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
This paper investigates capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.
no code implementations • 27 Nov 2020 • Yilun Jin, Lixin Fan, Kam Woh Ng, Ce Ju, Qiang Yang
Deep neural networks (DNNs) are known to be prone to adversarial attacks, for which many remedies are proposed.
no code implementations • 16 Mar 2021 • Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang
This paper proposes a novel ternary hash encoding for learning-to-hash methods, which provides a principled and more efficient coding scheme whose performance surpasses that of state-of-the-art binary hashing counterparts.
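A ternary code quantizes each feature into {-1, 0, +1}, with 0 acting as an "abstain" symbol. The threshold, and the mismatch-counting distance below that ignores abstaining positions, are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def ternarize(z, t=0.3):
    # Quantize real-valued features into {-1, 0, +1} with a dead zone
    # around zero (threshold t is an illustrative choice).
    return np.where(z > t, 1, np.where(z < -t, -1, 0))

def ternary_distance(a, b):
    # Hypothetical mismatch count: positions where both codes are
    # confident (nonzero) and disagree; 0-coded positions abstain.
    return int(np.sum((a * b) == -1))

a = ternarize(np.array([0.9, -0.1, -0.8, 0.5]))  # -> [ 1, 0, -1, 1]
b = ternarize(np.array([0.7,  0.6,  0.9, 0.4]))  # -> [ 1, 1,  1, 1]

print(ternary_distance(a, b))  # 1 (only position 2 is a confident mismatch)
```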
no code implementations • CVPR 2021 • Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third parties.
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop pays a special interest in theoretic foundations, limitations, and new application trends in the scope of XAI.
no code implementations • 27 Sep 2021 • Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang
To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.
no code implementations • 11 Mar 2022 • Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
no code implementations • 27 May 2022 • Nannan Wu, Ning Zhang, Wenjun Wang, Lixin Fan, Qiang Yang
The proposed algorithm, FadMan, is a vertical federated learning framework for a public node aligned with many private nodes of different features; it is validated on two tasks, correlated anomaly detection on multiple attributed networks and anomaly detection on an attributeless network, using five real-world datasets.
no code implementations • 1 Sep 2022 • Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang
In addition, it is a mandate for a federated learning system to achieve high \textit{efficiency} in order to enable large-scale model training and deployment.
no code implementations • 8 Sep 2022 • Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang
We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely-deployed VFL algorithms.
no code implementations • 14 Nov 2022 • Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren
To deter such misbehavior, it is essential to establish a mechanism for verifying the ownership of the model as well as tracing its origin to the leaker among the FL participants.
no code implementations • 24 Nov 2022 • Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
Extensive experimental results under a variety of settings justify the superiority of FedCut, which demonstrates extremely robust model performance (MP) under various attacks.
no code implementations • 30 Jan 2023 • Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang
Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.
no code implementations • 10 Apr 2023 • Xiaojin Zhang, Anbu Huang, Lixin Fan, Kai Chen, Qiang Yang
However, existing multi-objective optimization frameworks are very time-consuming and do not guarantee the existence of the Pareto frontier. This motivates us to transform the multi-objective problem into a single-objective problem, which is more efficient and easier to solve.
no code implementations • 11 Apr 2023 • Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang
To address this, we propose the first game-theoretic framework that considers both FL defenders and attackers in terms of their respective payoffs, which include computational costs, FL model utilities, and privacy leakage risks.
no code implementations • 29 Apr 2023 • Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.
no code implementations • 8 May 2023 • Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun Cao, Qiang Yang
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
no code implementations • 10 May 2023 • Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang, Xiaochun Cao
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
no code implementations • 28 May 2023 • Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang
Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.
no code implementations • 13 Jun 2023 • Bowen Li, Hanlin Gu, Ruoxin Chen, Jie Li, Chentao Wu, Na Ruan, Xueming Si, Lixin Fan
We investigate a Temporal Gradient Inversion Attack with a Robust Optimization framework, called TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients.
no code implementations • 20 Jul 2023 • Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto-optimal solutions, where each solution is a set of hyperparameters achieving an optimal tradeoff between utility loss, training cost, and privacy leakage.
no code implementations • 24 Oct 2023 • Yuanfeng Song, Yuanqin He, Xuefang Zhao, Hanlin Gu, Di Jiang, Haijun Yang, Lixin Fan, Qiang Yang
The rapid emergence of Large Language Models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm.
no code implementations • 29 Nov 2023 • Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang
Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on the FTL-FM framework to categorize state-of-the-art FTL-FM works, and comprehensively overview FTL-FM works based on the proposed taxonomy.
no code implementations • 27 Dec 2023 • Hanlin Gu, Xinyuan Zhao, Gongxi Zhu, Yuxing Han, Yan Kang, Lixin Fan, Qiang Yang
Concerns about utility, privacy, and training efficiency in FL have garnered significant research attention.
no code implementations • 22 Feb 2024 • Qi Hu, Weifeng Jiang, Haoran Li, ZiHao Wang, Jiaxin Bai, Qianren Mao, Yangqiu Song, Lixin Fan, JianXin Li
An entity can be involved in multiple knowledge graphs (KGs), and reasoning over multiple KGs and answering complex queries on multi-source KGs is important for discovering knowledge across graphs.
no code implementations • 6 Apr 2024 • Yan Kang, Ziyao Ren, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, which are pivotal elements toward a trustworthy federated learning system.
no code implementations • 18 Apr 2024 • Yuanqin He, Yan Kang, Lixin Fan, Qiang Yang
To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without the reliance on labeled test sets and external tools, thus ensuring strong privacy-preserving capability.
1 code implementation • 9 Feb 2024 • Gongxi Zhu, Donghao Li, Hanlin Gu, Yuxing Han, Yuan YAO, Lixin Fan, Qiang Yang
Firstly, combining model information from multiple communication rounds (Multi-temporal) enhances the overall effectiveness of MIAs compared to utilizing model information from a single epoch.
1 code implementation • 26 Apr 2022 • Haoran Li, Yangqiu Song, Lixin Fan
To this end, we propose effective defense objectives to protect persona leakage from hidden states.
1 code implementation • NAACL 2022 • Haoran Li, Yangqiu Song, Lixin Fan
To this end, we propose effective defense objectives to protect persona leakage from hidden states.
1 code implementation • 12 Jul 2021 • Chun Chet Ng, Akmalul Khairi Bin Nazaruddin, Yeong Khang Lee, Xinyu Wang, Yuliang Liu, Chee Seng Chan, Lianwen Jin, Yipeng Sun, Lixin Fan
With hundreds of thousands of electronic chip components being manufactured every day, chip manufacturers have seen increasing demand for a more efficient and effective way of inspecting the quality of printed texts on chip components.
1 code implementation • 25 Aug 2020 • Jian Han Lim, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
By and large, existing Intellectual Property (IP) protection on deep neural networks typically i) focuses on the image classification task only, and ii) follows a standard digital watermarking framework that was conventionally used to protect the ownership of multimedia and video content.
1 code implementation • 18 Aug 2022 • Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang
In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.
1 code implementation • 8 Feb 2021 • Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang
Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third parties.
1 code implementation • 27 Sep 2021 • Bowen Li, Lixin Fan, Hanlin Gu, Jie Li, Qiang Yang
To address these risks, ownership verification of federated learning models is a prerequisite for protecting federated learning model intellectual property rights (IPR), i.e., FedIPR.
1 code implementation • 29 Jan 2024 • Qing Shuai, Zhiyuan Yu, Zhize Zhou, Lixin Fan, Haijun Yang, Can Yang, Xiaowei Zhou
This paper addresses the challenging task of reconstructing the poses of multiple individuals engaged in close interactions, captured by multiple calibrated cameras.
1 code implementation • 21 May 2018 • Wenyan Yang, Yanlin Qian, Francesco Cricri, Lixin Fan, Joni-Kristian Kamarainen
We introduce a high-resolution equirectangular panorama (360-degree, virtual reality) dataset for object detection and propose a multi-projection variant of the YOLO detector.
2 code implementations • 16 Sep 2019 • Lixin Fan, Kam Woh Ng, Chee Seng Chan
With the substantial amount of time, resources and human (team) effort invested to explore and develop successful deep neural networks (DNNs), there emerges an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of legitimate owners.
1 code implementation • NeurIPS 2019 • Lixin Fan, Kam Woh Ng, Chee Seng Chan
With the substantial amount of time, resources and human (team) effort invested to explore and develop successful deep neural networks (DNNs), there emerges an urgent need to protect these inventions from being illegally copied, redistributed, or abused without respecting the intellectual property of legitimate owners.
1 code implementation • 16 Oct 2023 • Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang
FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.