1 code implementation • 17 Apr 2022 • Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise necessitated by the limited number of available domains.
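The leave-one-out protocol mentioned above holds out each available domain in turn as the unseen test domain and trains on the remaining ones. A minimal sketch of that loop, where `train_model` and `evaluate` are hypothetical placeholders for the training and evaluation routines:

```python
from typing import Any, Callable, Dict, List


def leave_one_domain_out(domains: Dict[str, Any],
                         train_model: Callable[[List[Any]], Any],
                         evaluate: Callable[[Any, Any], float]) -> Dict[str, float]:
    """Hold out each domain once, train on the rest, and test on the held-out one."""
    scores = {}
    for held_out, test_data in domains.items():
        train_data = [d for name, d in domains.items() if name != held_out]
        model = train_model(train_data)                 # train on the remaining domains
        scores[held_out] = evaluate(model, test_data)   # evaluate on the unseen domain
    return scores
```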
no code implementations • 22 Feb 2022 • Han Yu, Akane Sano
We first applied data augmentation techniques on the physiological and behavioral data to improve the robustness of supervised stress detection models.
1 code implementation • 16 Feb 2022 • Huiyuan Yang, Han Yu, Kusha Sridhar, Thomas Vaessen, Inez Myin-Germeys, Akane Sano
For example, although combining bio-signals from multiple sensors (i.e., a chest pad sensor and a wrist wearable sensor) has been proven effective for improving performance, wearing multiple devices might be impractical in the free-living context.
no code implementations • 15 Feb 2022 • Rui Liu, Han Yu
We propose a unique 3-tiered taxonomy of the FedGNNs literature to provide a clear view into how GNNs work in the context of Federated Learning (FL).
no code implementations • 15 Feb 2022 • Yanci Zhang, Han Yu
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
no code implementations • 31 Jan 2022 • Shenglai Zeng, Zonghang Li, Hongfang Yu, Yihong He, Zenglin Xu, Dusit Niyato, Han Yu
In this paper, we propose a data heterogeneity-robust FL approach, FedGSP, to address this challenge by leveraging a novel concept of dynamic Sequential-to-Parallel (STP) collaborative training.
no code implementations • 3 Jan 2022 • Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin
In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.
no code implementations • 16 Dec 2021 • Xiaojie Guo, Shugen Wang, Hanqing Zhao, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
In addition, this kind of product description should be eye-catching to the readers.
no code implementations • 15 Dec 2021 • Xueying Zhang, Yanyan Zou, Hainan Zhang, Jing Zhou, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Xueqi He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
It consists of two main components: 1) natural language generation, which is built from a transformer-pointer network and a pre-trained sequence-to-sequence model trained on millions of samples from our in-house platform; and 2) copywriting quality control, which is based on both automatic evaluation and human screening.
no code implementations • 2 Nov 2021 • Yuxin Shi, Han Yu, Cyril Leung
Recent advances in Federated Learning (FL) have brought large-scale machine learning opportunities for massive distributed clients with performance and data privacy guarantees.
no code implementations • 26 Oct 2021 • Xiaohu Wu, Han Yu
A key unaddressed scenario is that these FL participants are in a competitive market, where market shares represent their competitiveness.
1 code implementation • 5 Sep 2021 • Zelei Liu, YuanYuan Chen, Han Yu, Yang Liu, Lizhen Cui
In addition, we design a guided Monte Carlo sampling approach combined with within-round and between-round truncation to further reduce the number of model reconstructions and evaluations required, and validate it through extensive experiments under diverse realistic data distribution settings.
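The entry above only names the estimator, so the following is a generic truncated Monte Carlo Shapley sketch under my own simplifications: it implements within-round truncation but omits the guided sampling and between-round truncation, and `utility` (e.g., accuracy of a model reconstructed from a coalition's updates) is a hypothetical placeholder.

```python
import random
from typing import Callable, Dict, List


def truncated_mc_shapley(players: List[str],
                         utility: Callable[[List[str]], float],
                         num_rounds: int = 100,
                         truncation_tol: float = 1e-3) -> Dict[str, float]:
    """Estimate Shapley values by sampling permutations of players and
    truncating a permutation once marginal gains become negligible."""
    grand_utility = utility(list(players))
    shapley = {p: 0.0 for p in players}
    for _ in range(num_rounds):
        perm = random.sample(players, len(players))
        prev_u = utility([])          # utility of the empty coalition
        coalition: List[str] = []
        for p in perm:
            if abs(grand_utility - prev_u) < truncation_tol:
                marginal = 0.0        # truncate: remaining players contribute ~0 this round
            else:
                coalition.append(p)
                cur_u = utility(coalition)
                marginal = cur_u - prev_u
                prev_u = cur_u
            shapley[p] += marginal
    return {p: v / num_rounds for p, v in shapley.items()}
```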
no code implementations • 31 Aug 2021 • Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
Classic machine learning methods are built on the i.i.d. assumption that training and testing data are independent and identically distributed.
no code implementations • 22 Aug 2021 • Sone Kyaw Pye, Han Yu
data, DP, and RA.
no code implementations • 3 Aug 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Noisy labels are commonly found in real-world data and cause performance degradation in deep neural networks.
1 code implementation • 19 Jul 2021 • Han Yu, Thomas Vaessen, Inez Myin-Germeys, Akane Sano
Compared to the baseline method using the samples with complete modalities, the performance of the MFN improved by 1.6% in F1-score.
1 code implementation • 22 Jun 2021 • Han Yu, Asami Itoh, Ryota Sakamoto, Motomu Shimaoka, Akane Sano
Motivated by the differences in self-reported health and wellbeing labels between nurses and doctors, and by the correlations among these labels, we propose a job-role based multitask and multilabel deep learning model that jointly models physiological and behavioral data from nurses and doctors to predict participants' multidimensional self-reported health and wellbeing status for the next day.
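As a rough illustration of the job-role based multitask, multilabel idea (not the authors' exact architecture; the feature and label dimensions below are assumptions), one can share an encoder across roles and attach a separate multilabel head per job role:

```python
import torch
import torch.nn as nn


class RoleMultitaskModel(nn.Module):
    def __init__(self, num_features: int = 64, num_labels: int = 5):
        super().__init__()
        # encoder shared by both job roles
        self.shared = nn.Sequential(
            nn.Linear(num_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # one multilabel output head per job role
        self.heads = nn.ModuleDict({
            "nurse": nn.Linear(64, num_labels),
            "doctor": nn.Linear(64, num_labels),
        })

    def forward(self, x: torch.Tensor, role: str) -> torch.Tensor:
        return self.heads[role](self.shared(x))  # logits for BCEWithLogitsLoss


# usage: multilabel loss per role, combined into one training objective
model = RoleMultitaskModel()
criterion = nn.BCEWithLogitsLoss()
x_nurse, y_nurse = torch.randn(8, 64), torch.randint(0, 2, (8, 5)).float()
loss = criterion(model(x_nurse, "nurse"), y_nurse)
```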
1 code implementation • NAACL 2021 • Xu Guo, Boyang Li, Han Yu, Chunyan Miao
The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality.
1 code implementation • CVPR 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models.
no code implementations • 1 Mar 2021 • Alysa Ziying Tan, Han Yu, Lizhen Cui, Qiang Yang
In parallel with the rapid adoption of Artificial Intelligence (AI) empowered by advances in AI research, there has been growing awareness of and concern about data privacy.
1 code implementation • 4 Feb 2021 • YuanYuan Chen, Boyang Li, Han Yu, Pengcheng Wu, Chunyan Miao
By computing hypergradients with respect to the weights of training data, HYDRA assesses the contribution of training data toward test data points throughout the training trajectory.
no code implementations • 7 Dec 2020 • Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.
no code implementations • 3 Dec 2020 • Xu Guo, Han Yu, Boyang Li, Hao Wang, Pengwei Xing, Siwei Feng, Zaiqing Nie, Chunyan Miao
In this paper, we propose the FedHumor approach for the recognition of humorous content in a personalized manner through Federated Learning (FL).
no code implementations • 6 Nov 2020 • Leye Wang, Han Yu, Xiao Han
In particular, we first propose a federated crowdsensing framework, which analyzes the privacy concerns of each crowdsensing stage (i.e., task creation, task assignment, task execution, and data aggregation) and discusses how federated learning techniques may take effect.
no code implementations • 15 Aug 2020 • Mingshu Cong, Han Yu, Xi Weng, Jiabao Qu, Yang Liu, Siu Ming Yiu
In order to build an ecosystem for FL to operate in a sustainable manner, it has to be economically attractive to data owners.
Computer Science and Game Theory
1 code implementation • 3 Aug 2020 • Han Yu, Alan D. Hutson
In general, there is a common misconception that tests of $\rho_s=0$ are robust to deviations from bivariate normality.
Methodology • Applications
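As a quick illustration of why that robustness cannot be taken for granted (my own simulation setup, not the paper's study design), one can estimate the empirical size of the usual Spearman test when the variables are dependent yet have $\rho_s \approx 0$:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n, reps, alpha = 50, 2000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.standard_normal(n)
    y = x ** 2 + 0.1 * rng.standard_normal(n)  # dependent, but Spearman correlation ~ 0
    _, pvalue = spearmanr(x, y)                # standard test of H0: rho_s = 0
    rejections += pvalue < alpha
print(f"empirical rejection rate: {rejections / reps:.3f} (nominal level {alpha})")
```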
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics such as power consumption, electromagnetic leaks, and sound can be exploited by side-channel attacks to compromise the system.
no code implementations • 15 Jun 2020 • Ce Ju, Ruihui Zhao, Jichao Sun, Xiguang Wei, Bo Zhao, Yang Liu, Hongshan Li, Tianjian Chen, Xinwei Zhang, Dashan Gao, Ben Tan, Han Yu, Chuning He, Yuan Jin
It adopts federated averaging during model training, so that patient data never leaves the hospitals at any point during model training and forecasting.
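Federated averaging itself is simple: the server replaces the global parameters with a data-size-weighted average of the locally trained parameters, so raw records never leave each site. A generic sketch (not the specific system above), assuming each hospital returns its parameters as a dict of numpy arrays:

```python
from typing import Dict, List

import numpy as np


def federated_average(local_params: List[Dict[str, np.ndarray]],
                      num_samples: List[int]) -> Dict[str, np.ndarray]:
    """Weight each site's parameters by its local sample count and average them."""
    total = float(sum(num_samples))
    weights = [n / total for n in num_samples]
    return {
        name: sum(w * params[name] for w, params in zip(weights, local_params))
        for name in local_params[0]
    }
```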
no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu
In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.
no code implementations • 25 Mar 2020 • Guangda Huzhang, Zhen-Jia Pang, Yongqing Gao, Yawen Liu, Weijie Shen, Wen-Ji Zhou, Qing Da, An-Xiang Zeng, Han Yu, Yang Yu, Zhi-Hua Zhou
The framework consists of an evaluator that generalizes to evaluate recommendations involving the context, a generator that maximizes the evaluator score through reinforcement learning, and a discriminator that ensures the generalization of the evaluator.
no code implementations • 4 Mar 2020 • Lingjuan Lyu, Han Yu, Qiang Yang
It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design on privacy preservation.
1 code implementation • 26 Feb 2020 • Yuan Liu, Shuai Sun, Zhengpeng Ai, Shuangfeng Zhang, Zelei Liu, Han Yu
In FedCoin, blockchain consensus entities calculate SVs, and a new block is created based on the Proof of Shapley (PoSap) protocol.
no code implementations • 20 Feb 2020 • Shangwei Guo, Tianwei Zhang, Han Yu, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu
It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes.
no code implementations • 30 Jan 2020 • Siwei Feng, Han Yu
Federated learning (FL) is a privacy-preserving paradigm for training collective machine learning models with locally stored data from multiple participants.
1 code implementation • 29 Jan 2020 • Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen, Zhiqi Shen
It maintains a small set of benchmark samples on the FL server and quantifies the credibility of each client's local data, without directly observing the data, by computing the mutual cross-entropy between the performance of the FL model on the client's local dataset and that of the client's local FL model on the benchmark dataset.
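A rough sketch of that credibility computation under my own simplifications: the mutual cross-entropy combines the global model's loss on a client's data with the client model's loss on the benchmark set, and lower values yield higher credibility. The normalization below is an illustrative choice, not necessarily the paper's exact formula.

```python
import numpy as np


def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """probs: (n, num_classes) predicted probabilities; labels: (n,) integer labels."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))


def credibility_scores(mutual_ce: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """mutual_ce[k] = CE(global model on client k's data) + CE(client k's model on benchmark).
    Lower mutual cross-entropy -> more credible local data."""
    scores = np.exp(-alpha * np.asarray(mutual_ce, dtype=float))
    return scores / scores.sum()
```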
no code implementations • 17 Jan 2020 • Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, YuanYuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
Federated learning (FL) is a promising approach to resolve this challenge.
7 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • 27 Nov 2019 • Jun Zhao, Teng Wang, Tao Bai, Kwok-Yan Lam, Zhiying Xu, Shuyu Shi, Xuebin Ren, Xinyu Yang, Yang Liu, Han Yu
Although both classical Gaussian mechanisms [1, 2] assume $0 < \epsilon \leq 1$, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of $\epsilon$ and $\delta$ where the added noise amounts of [1, 2] do not achieve $(\epsilon,\delta)$-DP.
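For reference, the classical calibration in question sets the noise scale to $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\epsilon$, a guarantee proven only for $0 < \epsilon \leq 1$; a minimal sketch of that mechanism (function and parameter names are my own):

```python
import math

import numpy as np


def classical_gaussian_mechanism(value, sensitivity: float,
                                 epsilon: float, delta: float, rng=None):
    """Add Gaussian noise calibrated by the classical (epsilon, delta)-DP formula."""
    if not (0.0 < epsilon <= 1.0):
        raise ValueError("the classical calibration is only proven for 0 < epsilon <= 1")
    rng = rng or np.random.default_rng()
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return np.asarray(value, dtype=float) + rng.normal(0.0, sigma, size=np.shape(value))
```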
1 code implementation • 17 Sep 2019 • Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang
Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.
Ranked #5 on Domain Adaptation on ImageCLEF-DA
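The entry above contrasts aligning cross-domain marginal versus conditional distributions; a common building block for such alignment is the maximum mean discrepancy (MMD) between source and target features. A minimal linear-kernel version is sketched below purely as an illustration, not as the paper's exact objective (a conditional variant would apply the same distance per class).

```python
import numpy as np


def linear_mmd(source: np.ndarray, target: np.ndarray) -> float:
    """Squared MMD with a linear kernel: the distance between feature means.
    source: (n_s, d) source-domain features; target: (n_t, d) target-domain features."""
    diff = source.mean(axis=0) - target.mean(axis=0)
    return float(diff @ diff)
```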
no code implementations • 30 Aug 2019 • Chang Liu, Yi Dong, Han Yu, Zhiqi Shen, Zhanning Gao, Pan Wang, Changgong Zhang, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Video content has become a critical tool for promoting products in E-commerce.
no code implementations • 4 Jun 2019 • Teng Wang, Jun Zhao, Han Yu, Jinyan Liu, Xinyu Yang, Xuebin Ren, Shuyu Shi
To investigate such ethical dilemmas, recent studies have adopted preference aggregation, in which each voter expresses her/his preferences over decisions for the possible ethical dilemma scenarios, and a centralized system aggregates these preferences to obtain the winning decision.
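As a small illustration of preference aggregation (the aggregation rule here, a Borda count, is my own choice for the sketch and not necessarily the rule used in the cited studies), each voter submits a ranking over the candidate decisions and the system returns the highest-scoring one:

```python
from typing import Dict, List


def borda_winner(ballots: List[List[str]]) -> str:
    """Each ballot ranks all candidate decisions from most to least preferred."""
    scores: Dict[str, int] = {}
    num_candidates = len(ballots[0])
    for ranking in ballots:
        for position, decision in enumerate(ranking):
            scores[decision] = scores.get(decision, 0) + (num_candidates - 1 - position)
    return max(scores, key=scores.get)


# usage: three voters ranking three possible decisions for a dilemma scenario
print(borda_winner([["swerve", "brake", "continue"],
                    ["brake", "swerve", "continue"],
                    ["brake", "continue", "swerve"]]))
```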
1 code implementation • 4 Jun 2019 • Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng
This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
no code implementations • 16 May 2019 • Jiawen Kang, Zehui Xiong, Dusit Niyato, Han Yu, Ying-Chang Liang, Dong In Kim
To strengthen data privacy and security, federated learning has been proposed as an emerging machine learning technique that enables large-scale nodes, e.g., mobile devices, to train models in a distributed manner and share them globally without revealing their local data.
1 code implementation • 2 Apr 2019 • Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang
In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.
Ranked #4 on Transfer Learning on Office-Home
no code implementations • 2 Jan 2019 • Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel, Cyril Leung
In order to enable workforce management systems to follow the IEEE Ethically Aligned Design guidelines to prioritize worker wellbeing, we propose a distributed Computational Productive Laziness (CPL) approach in this paper.
no code implementations • 7 Dec 2018 • Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser, Qiang Yang
As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination.
no code implementations • 5 Aug 2018 • Siwei Feng, Han Yu, Marco F. Duarte
In this paper, we propose a metric for the relevance between a source sample and the target samples.
1 code implementation • 19 Jul 2018 • Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
Ranked #1 on Domain Adaptation on Office-Caltech-10
no code implementations • 26 Jun 2018 • Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu
STL consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer.
no code implementations • CVPR 2017 • Si Liu, Changhu Wang, Ruihe Qian, Han Yu, Renda Bao
In this paper, we develop a Single frame Video Parsing (SVP) method which requires only one labeled frame per video in the training stage.
no code implementations • 26 Jan 2016 • Simon Fauvel, Han Yu
In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem.