no code implementations • 22 Sep 2023 • Yizhou Chen, AnXiang Zeng, Guangda Huzhang, Qingtao Yu, Kerui Zhang, Cao Yuanpeng, Kangle Wu, Han Yu, Zhiming Zhou
However, neighbor aggregation, a critical building block of graph networks, is currently extended to temporal graphs in a straightforward manner from its static-graph counterpart.
no code implementations • 6 Sep 2023 • Zhiwei Xiong, Yunfan Zhang, Zhiqi Shen, Peiran Ren, Han Yu
Image aesthetics assessment (IAA) aims to estimate the aesthetics of images.
no code implementations • 1 Sep 2023 • Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, Yelong Shen
To address this issue, we present a comprehensive analysis of the memory usage, performance, and training time of memory-saving techniques for PPO.
no code implementations • 29 Aug 2023 • Pengwei Xing, Songtao Lu, Han Yu
To improve interpretability and explore the balance principle between generality and personalization under a multi-domain CoT prompt selection scenario, we propose the Federated Logic rule learning approach (FedLogic).
no code implementations • 24 Aug 2023 • Yanci Zhang, Han Yu
Under LR-XFL, FL clients create local logic rules based on their local data and send them, along with model updates, to the FL server.
no code implementations • 14 Aug 2023 • Rui Liu, YuanYuan Chen, Anran Li, Yi Ding, Han Yu, Cuntai Guan
Though numerous research groups and institutes collect a multitude of EEG datasets for the same BCI task, sharing EEG data from multiple sites is still challenging due to the heterogeneity of devices.
no code implementations • 7 Aug 2023 • Yulan Gao, Zhaoxiang Hou, Chengyi Yang, Zengxiang Li, Han Yu
Federated learning (FL) addresses data privacy concerns by enabling collaborative training of AI models across distributed data owners.
no code implementations • 20 Jul 2023 • Xingxuan Zhang, Renzhe Xu, Han Yu, Yancheng Dong, Pengfei Tian, Peng Cui
However, we reveal that Adam is not necessarily the optimal choice for the majority of current DG methods and datasets.
no code implementations • 20 Jul 2023 • Yuxin Shi, Zelei Liu, Zhuan Shi, Han Yu
By not using threshold-based reputation filtering, it provides FL clients with opportunities to redeem their reputations after a perceived poor performance, thereby further enhancing fair client treatment.
no code implementations • 1 Jul 2023 • Has Sun, Xiaoli Tang, Chengyi Yang, Zhenpeng Yu, Xiuli Wang, Qijie Ding, Zengxiang Li, Han Yu
Federated learning (FL) offers a solution to this problem by enabling local data processing on each participant, such as gas companies and heating stations.
no code implementations • 26 Jun 2023 • Ziqiang Ye, Yulan Gao, Yue Xiao, Minrui Xu, Han Yu, Dusit Niyato
We develop cost-effective designs for both task offloading mode selection and resource allocation, subject to individual link-latency guarantees for mobile devices, while satisfying the required success ratio for their computation tasks.
no code implementations • 16 Jun 2023 • Chao Ren, Han Yu, Rudai Yan, Minrui Xu, Yuan Shen, Huihui Zhu, Dusit Niyato, Zhao Yang Dong, Leong Chuan Kwek
This review serves as a first-of-its-kind comprehensive guide for researchers and practitioners interested in understanding and advancing the field of QFL.
no code implementations • 25 May 2023 • Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu
Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
no code implementations • 24 May 2023 • Han Yu, Xingxuan Zhang, Renzhe Xu, Jiashuo Liu, Yue He, Peng Cui
Domain generalization aims to solve the challenge of Out-of-Distribution (OOD) generalization by leveraging common knowledge learned from multiple training domains to generalize to unseen test domains.
no code implementations • 11 May 2023 • Yulan Gao, Yansong Zhao, Han Yu
However, the problem of optimizing FL client selection in mobile federated learning networks (MFLNs), where devices move in and out of each other's coverage and no FL server knows all the data owners, remains open.
no code implementations • 11 May 2023 • Xiaoli Tang, Han Yu
However, this assumption is not realistic in practical AFL marketplaces in which multiple data consumers can compete to attract data owners to join their respective FL tasks.
no code implementations • 19 Apr 2023 • Yu Guo, Ryan Wen Liu, Jiangtian Nie, Lingjuan Lyu, Zehui Xiong, Jiawen Kang, Han Yu, Dusit Niyato
To eliminate the influences of adverse weather conditions, we propose a dual attention and dual frequency-guided dehazing network (termed DADFNet) for real-time visibility enhancement.
1 code implementation • 23 Mar 2023 • Liping Yi, Gang Wang, Xiaoguang Liu, Zhuan Shi, Han Yu
It is a communication- and computation-efficient model-heterogeneous FL framework that trains, at the FL server, a shared generalized global prediction header on representations extracted by the heterogeneous feature extractors of clients' models.
1 code implementation • CVPR 2023 • Xingxuan Zhang, Renzhe Xu, Han Yu, Hao Zou, Peng Cui
Yet the definition of flatness discussed in SAM and its follow-ups is limited to zeroth-order flatness (i.e., the worst-case loss within a perturbation radius).
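The zeroth-order flatness this entry refers to can be sketched numerically as the worst-case loss increase within a perturbation radius around the current weights. The following random-search estimate is purely illustrative; it is neither the paper's proposed method nor the actual SAM optimization procedure:

```python
import numpy as np

def zeroth_order_flatness(loss_fn, w, radius, n_dirs=100, seed=0):
    """Estimate the worst-case loss increase within a perturbation
    radius (zeroth-order flatness) by random direction search."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w)
    worst = base
    for _ in range(n_dirs):
        d = rng.normal(size=w.shape)
        d *= radius / np.linalg.norm(d)  # project onto the radius sphere
        worst = max(worst, loss_fn(w + d))
    return worst - base
```

For a quadratic loss centered at the current weights, every direction on the sphere attains the same loss, so the estimate is exact; in general it only lower-bounds the true worst case.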
no code implementations • 27 Feb 2023 • Anran Li, Rui Liu, Ming Hu, Luu Anh Tuan, Han Yu
Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
no code implementations • 22 Feb 2023 • YuanYuan Chen, Zichen Chen, Sheng Guo, Yansong Zhao, Zelei Liu, Pengcheng Wu, Chengyi Yang, Zengxiang Li, Han Yu
Artificial intelligence (AI)-empowered industrial fault diagnostics is important in ensuring the safe operation of industrial applications.
no code implementations • 21 Feb 2023 • Anran Li, Hongyi Peng, Lan Zhang, Jiahui Huang, Qing Guo, Han Yu, Yang Liu
Vertical Federated Learning (VFL) enables multiple data owners, each holding a different subset of features about a largely overlapping set of data samples, to jointly train a useful global model.
no code implementations • 17 Feb 2023 • Zhaoyang Cao, Han Yu, Huiyuan Yang, Akane Sano
Due to individual heterogeneity, person-specific models usually achieve better performance than generic (one-size-fits-all) models in data-driven health applications.
no code implementations • 3 Feb 2023 • Yizhou Chen, Guangda Huzhang, AnXiang Zeng, Qingtao Yu, Hui Sun, Heng-yi Li, Jingyi Li, Yabo Ni, Han Yu, Zhiming Zhou
However, such a method has two important limitations in real-world applications: 1) it is hard to learn embeddings that generalize well for users and items with rare interactions on their own; and 2) it may incur unbearably high memory costs when the number of users and items scales up.
no code implementations • 23 Jan 2023 • Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang
Two novel optimisation techniques are presented to achieve AOI-SNNs: a regularisation and a cutoff.
no code implementations • 28 Dec 2022 • Shipeng Wang, Qingzhong Li, Lizhen Cui, Zhongmin Yan, Yonghui Xu, Zhuan Shi, Xinping Min, Zhiqi Shen, Han Yu
Crowdsourcing, in which human intelligence and productivity are dynamically mobilized to tackle tasks too complex for automation alone to handle, has grown into an important research topic and inspired new businesses (e.g., Uber, Airbnb).
no code implementations • 2 Dec 2022 • Han Yu, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, Xingxuan Zhang
The problem of covariate-shift generalization has attracted intensive research attention.
no code implementations • 21 Nov 2022 • Zhaoyang Cao, Han Yu, Huiyuan Yang, Akane Sano
Due to individual heterogeneity, performance gaps are observed between generic (one-size-fits-all) models and person-specific models in data-driven health applications.
1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie
Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.
no code implementations • 6 Nov 2022 • Xu Guo, Han Yu
Recent advances in NLP are brought by a range of large-scale pretrained language models (PLMs).
no code implementations • 1 Nov 2022 • Yulan Gao, Ziqiang Ye, Han Yu, Zehui Xiong, Yue Xiao, Dusit Niyato
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
no code implementations • 18 Oct 2022 • Jian-Yong Wang, Han Yu
Probability and conditional probability of co-occurrence are defined in a general setting using set functions, developing a rigorous measure-theoretic foundation that addresses the inherent challenge of data sparseness.
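In the simplest instance of this idea, the set function is a counting measure over transactions and conditional co-occurrence probability is a ratio of measures. The toy below illustrates that special case only; it is not the paper's general measure-theoretic construction:

```python
from fractions import Fraction

def cond_cooccurrence(transactions, a_items, b_items):
    """P(A | B) under a counting measure mu over transactions:
    mu(A and B occur together) / mu(B occurs)."""
    a, b = set(a_items), set(b_items)
    n_b = sum(1 for t in transactions if b <= set(t))
    n_ab = sum(1 for t in transactions if (a | b) <= set(t))
    if n_b == 0:
        return None  # conditioning event has measure zero: undefined
    return Fraction(n_ab, n_b)
```

Returning `None` when the conditioning event has measure zero mirrors the data-sparseness problem the entry mentions: with sparse data, many conditioning events are simply never observed.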
1 code implementation • 13 Oct 2022 • Huiyuan Yang, Han Yu, Akane Sano
As an effective technique to increase the data variability and thus train deep models with better generalization, data augmentation (DA) is a critical step for the success of deep learning models on biobehavioral time series data.
no code implementations • 13 Oct 2022 • Han Yu, Huiyuan Yang, Akane Sano
However, view-learning methods are not yet well developed for time-series data.
1 code implementation • 6 Oct 2022 • Xu Guo, Boyang Li, Han Yu
Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks.
no code implementations • 21 Sep 2022 • Xiaoli Tang, Han Yu
As such, building trustworthy AIRTB auctioning systems has emerged as an important direction of research in this field in recent years.
1 code implementation • 10 Aug 2022 • YuanYuan Chen, Zichen Chen, Pengcheng Wu, Han Yu
To the best of our knowledge, FedOBD is the first approach to perform dropout on FL models at the block level rather than at the individual parameter level.
no code implementations • 7 Aug 2022 • Khadija Zanna, Kusha Sridhar, Han Yu, Akane Sano
However, there is still a lack of standard in evaluating bias in such machine learning models in the field, which leads to challenges in providing reliable predictions and in addressing disparities.
2 code implementations • CVPR 2023 • Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise on the limited number of domains.
no code implementations • IEEE Transactions on Industrial Informatics 2022 • Ryan Wen Liu, Maohan Liang, Jiangtian Nie, Yanli Yuan, Zehui Xiong, Han Yu
The revolutionary advances in machine learning and data mining techniques have contributed greatly to the rapid development of maritime Internet of Things (IoT).
no code implementations • 22 Feb 2022 • Han Yu, Akane Sano
We first applied data augmentation techniques on the physiological and behavioral data to improve the robustness of supervised stress detection models.
1 code implementation • 16 Feb 2022 • Huiyuan Yang, Han Yu, Kusha Sridhar, Thomas Vaessen, Inez Myin-Germeys, Akane Sano
For example, although combining bio-signals from multiple sensors (i.e., a chest pad sensor and a wrist wearable sensor) has proven effective for improving performance, wearing multiple devices may be impractical in free-living contexts.
no code implementations • 15 Feb 2022 • Yanci Zhang, Han Yu
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
no code implementations • 15 Feb 2022 • Rui Liu, Pengwei Xing, Zichao Deng, Anran Li, Cuntai Guan, Han Yu
This has led to the rapid development of the emerging research field of federated graph neural networks (FedGNNs).
no code implementations • 31 Jan 2022 • Shenglai Zeng, Zonghang Li, Hongfang Yu, Yihong He, Zenglin Xu, Dusit Niyato, Han Yu
In this paper, we propose a data heterogeneity-robust FL approach, FedGSP, to address this challenge by leveraging on a novel concept of dynamic Sequential-to-Parallel (STP) collaborative training.
no code implementations • 3 Jan 2022 • Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin
In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.
no code implementations • 16 Dec 2021 • Xiaojie Guo, Shugen Wang, Hanqing Zhao, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
In addition, this kind of product description should be eye-catching to the readers.
no code implementations • 15 Dec 2021 • Xueying Zhang, Yanyan Zou, Hainan Zhang, Jing Zhou, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Xueqi He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
It consists of two main components: 1) natural language generation, which is built from a transformer-pointer network and a pre-trained sequence-to-sequence model based on millions of training data from our in-house platform; and 2) copywriting quality control, which is based on both automatic evaluation and human screening.
no code implementations • 2 Nov 2021 • Yuxin Shi, Han Yu, Cyril Leung
However, most current works focus on the interest of the central controller in FL, and overlook the interests of the FL clients.
no code implementations • 26 Oct 2021 • Xiaohu Wu, Han Yu
A key unaddressed scenario is that these FL participants are in a competitive market, where market shares represent their competitiveness.
1 code implementation • 5 Sep 2021 • Zelei Liu, YuanYuan Chen, Han Yu, Yang Liu, Lizhen Cui
In addition, we design a guided Monte Carlo sampling approach, combined with within-round and between-round truncation, to further reduce the number of model reconstructions and evaluations required, and validate it through extensive experiments under diverse realistic data distribution settings.
no code implementations • 31 Aug 2021 • Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
This paper represents the first comprehensive, systematic review of OOD generalization, encompassing a spectrum of aspects from problem definition, methodological development, and evaluation procedures, to the implications and future directions of the field.
no code implementations • 22 Aug 2021 • Sone Kyaw Pye, Han Yu
data, DP, and RA.
no code implementations • 3 Aug 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Noisy labels are commonly found in real-world data, which cause performance degradation of deep neural networks.
1 code implementation • 19 Jul 2021 • Han Yu, Thomas Vaessen, Inez Myin-Germeys, Akane Sano
Compared to the baseline method using only samples with complete modalities, the MFN improved performance by 1.6% in F1-score.
1 code implementation • 22 Jun 2021 • Han Yu, Asami Itoh, Ryota Sakamoto, Motomu Shimaoka, Akane Sano
Based on the differences in self-reported health and wellbeing labels between nurses and doctors, and the correlations among their labels, we proposed a job-role-based multitask and multilabel deep learning model that jointly models physiological and behavioral data for nurses and doctors to predict participants' next-day multidimensional self-reported health and wellbeing status.
1 code implementation • NAACL 2021 • Xu Guo, Boyang Li, Han Yu, Chunyan Miao
The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality.
1 code implementation • CVPR 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models.
no code implementations • 1 Mar 2021 • Alysa Ziying Tan, Han Yu, Lizhen Cui, Qiang Yang
In parallel with the rapid adoption of Artificial Intelligence (AI) empowered by advances in AI research, there have been growing awareness and concerns of data privacy.
1 code implementation • 4 Feb 2021 • YuanYuan Chen, Boyang Li, Han Yu, Pengcheng Wu, Chunyan Miao
the weights of training data, HYDRA assesses the contribution of training data toward test data points throughout the training trajectory.
no code implementations • 7 Dec 2020 • Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.
no code implementations • 3 Dec 2020 • Xu Guo, Han Yu, Boyang Li, Hao Wang, Pengwei Xing, Siwei Feng, Zaiqing Nie, Chunyan Miao
In this paper, we propose the FedHumor approach for the recognition of humorous content in a personalized manner through Federated Learning (FL).
no code implementations • 6 Nov 2020 • Leye Wang, Han Yu, Xiao Han
In particular, we first propose a federated crowdsensing framework, which analyzes the privacy concerns of each crowdsensing stage (i.e., task creation, task assignment, task execution, and data aggregation) and discusses how federated learning techniques may take effect.
no code implementations • 15 Aug 2020 • Mingshu Cong, Han Yu, Xi Weng, Jiabao Qu, Yang Liu, Siu Ming Yiu
In order to build an ecosystem for FL to operate in a sustainable manner, it has to be economically attractive to data owners.
Computer Science and Game Theory
1 code implementation • 3 Aug 2020 • Han Yu, Alan D. Hutson
In general, there is a common misconception that tests of $\rho_s=0$ are robust to deviations from bivariate normality.
Methodology Applications
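A distribution-free alternative to the normality-dependent tests this entry critiques is a permutation test of $H_0: \rho_s = 0$. The sketch below (assuming no ties in the data) is illustrative and is not the paper's proposed procedure:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks (no-ties case)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def spearman_perm_test(x, y, n_perm=1000, seed=0):
    """Two-sided permutation test of H0: rho_s = 0. Being
    distribution-free, it stays valid when (x, y) is far from
    bivariate normal."""
    rng = np.random.default_rng(seed)
    obs = spearman_rho(x, y)
    exceed = sum(abs(spearman_rho(x, rng.permutation(y))) >= abs(obs)
                 for _ in range(n_perm))
    # add-one correction keeps the p-value strictly positive
    return obs, (exceed + 1) / (n_perm + 1)
```

The permutation null is generated by shuffling one margin, which breaks any dependence while preserving both marginal distributions.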
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics such as power consumption, electromagnetic leaks and sound can be exploited by the side-channel attack to compromise the system.
no code implementations • 15 Jun 2020 • Ce Ju, Ruihui Zhao, Jichao Sun, Xiguang Wei, Bo Zhao, Yang Liu, Hongshan Li, Tianjian Chen, Xinwei Zhang, Dashan Gao, Ben Tan, Han Yu, Chuning He, Yuan Jin
It adopts federated averaging during the model training process, without patient data being taken out of the hospitals during the whole process of model training and forecasting.
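The federated averaging step mentioned here is, at its core, a dataset-size-weighted mean of client model parameters. A minimal sketch of that aggregation step (the function name is illustrative):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Federated averaging aggregation: weight each client's model
    parameters by its share of the total training data."""
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))
```

Because only parameters (not raw records) are averaged, patient data never needs to leave the hospitals, which is the privacy property the entry highlights.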
no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu
In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.
no code implementations • 25 Mar 2020 • Guangda Huzhang, Zhen-Jia Pang, Yongqing Gao, Yawen Liu, Weijie Shen, Wen-Ji Zhou, Qing Da, An-Xiang Zeng, Han Yu, Yang Yu, Zhi-Hua Zhou
The framework consists of an evaluator that generalizes to evaluate recommendations involving the context, a generator that maximizes the evaluator score via reinforcement learning, and a discriminator that ensures the generalization of the evaluator.
no code implementations • 4 Mar 2020 • Lingjuan Lyu, Han Yu, Qiang Yang
It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design on privacy preservation.
1 code implementation • 26 Feb 2020 • Yuan Liu, Shuai Sun, Zhengpeng Ai, Shuangfeng Zhang, Zelei Liu, Han Yu
In FedCoin, blockchain consensus entities calculate SVs and a new block is created based on the proof of Shapley (PoSap) protocol.
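The PoSap protocol itself is specific to the paper, but the Shapley-value (SV) computation the consensus entities perform can be sketched via standard Monte Carlo permutation sampling. The names and the utility function below are illustrative assumptions, not FedCoin's implementation:

```python
import random

def shapley_values(players, utility, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: average each player's marginal
    contribution over random join orders. `utility` maps a frozenset
    of players to a payoff (e.g., accuracy of a model trained on that
    coalition's data)."""
    rng = random.Random(seed)
    sv = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), utility(frozenset())
        for p in order:
            coalition.add(p)
            cur = utility(frozenset(coalition))
            sv[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: v / n_samples for p, v in sv.items()}
```

For an additive utility the estimate is exact after a single sample, since every marginal contribution equals the player's own value.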
no code implementations • 20 Feb 2020 • Shangwei Guo, Tianwei Zhang, Han Yu, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu
It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes.
no code implementations • 30 Jan 2020 • Siwei Feng, Han Yu
Federated learning (FL) is a privacy-preserving paradigm for training collective machine learning models with locally stored data from multiple participants.
1 code implementation • 29 Jan 2020 • Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen, Zhiqi Shen
It maintains a small set of benchmark samples on the FL server and quantifies the credibility of the client local data, without directly observing them, by computing the mutual cross-entropy between the performance of the FL model on the local datasets and that of the client's local FL model on the benchmark dataset.
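The mutual cross-entropy idea can be sketched as follows; the exact combination of the two cross-entropy terms and the mapping to a score are illustrative assumptions here, not the paper's formula:

```python
import numpy as np

def cross_entropy(labels, probs):
    """Mean cross-entropy of predicted class probabilities vs. true labels."""
    eps = 1e-12  # avoid log(0)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

def credibility(global_probs_on_local, local_labels,
                local_probs_on_bench, bench_labels):
    """Combine (1) the global model's cross-entropy on a client's local
    data with (2) the client model's cross-entropy on the server-held
    benchmark, and map the sum to (0, 1]; higher = more credible."""
    e = (cross_entropy(local_labels, global_probs_on_local)
         + cross_entropy(bench_labels, local_probs_on_bench))
    return float(np.exp(-e))
```

Intuitively, a client with noisy or poisoned data scores badly in both directions at once, so its combined score drops.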
1 code implementation • 17 Jan 2020 • Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, YuanYuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
Federated learning (FL) is a promising approach to resolve this challenge.
8 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • 27 Nov 2019 • Jun Zhao, Teng Wang, Tao Bai, Kwok-Yan Lam, Zhiying Xu, Shuyu Shi, Xuebin Ren, Xinyu Yang, Yang Liu, Han Yu
Although both classical Gaussian mechanisms [1, 2] assume $0 < \epsilon \leq 1$, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of $\epsilon$ and $\delta$ where the added noise amounts of [1, 2] do not achieve $(\epsilon,\delta)$-DP.
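The pitfall this entry describes is concrete: the classical Gaussian mechanism's noise scale $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\epsilon$ only carries an $(\epsilon,\delta)$-DP guarantee for $0 < \epsilon \leq 1$. A sketch that makes the validity range explicit (function names are illustrative):

```python
import math
import random

def classical_gaussian_sigma(sensitivity, epsilon, delta):
    """Noise scale sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    of the classical Gaussian mechanism. Its (epsilon, delta)-DP proof
    only covers 0 < epsilon <= 1, which is the misuse the entry flags."""
    if not 0 < epsilon <= 1:
        raise ValueError("classical analysis requires 0 < epsilon <= 1")
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release value + N(0, sigma^2) noise."""
    rng = rng or random.Random(0)
    return value + rng.gauss(0.0, classical_gaussian_sigma(sensitivity, epsilon, delta))
```

Guarding the parameter range in code, rather than silently computing a sigma, is one way to avoid the literature misuse the authors document.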
1 code implementation • 17 Sep 2019 • Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang
Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.
Ranked #6 on Domain Adaptation on ImageCLEF-DA
no code implementations • 30 Aug 2019 • Chang Liu, Yi Dong, Han Yu, Zhiqi Shen, Zhanning Gao, Pan Wang, Changgong Zhang, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Video contents have become a critical tool for promoting products in E-commerce.
no code implementations • 4 Jun 2019 • Teng Wang, Jun Zhao, Han Yu, Jinyan Liu, Xinyu Yang, Xuebin Ren, Shuyu Shi
To investigate such ethical dilemmas, recent studies have adopted preference aggregation, in which each voter expresses her/his preferences over decisions for the possible ethical dilemma scenarios, and a centralized system aggregates these preferences to obtain the winning decision.
1 code implementation • 4 Jun 2019 • Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng
This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
no code implementations • 16 May 2019 • Jiawen Kang, Zehui Xiong, Dusit Niyato, Han Yu, Ying-Chang Liang, Dong In Kim
To strengthen data privacy and security, federated learning, an emerging machine learning technique, is proposed to enable large-scale nodes, e.g., mobile devices, to train models in a distributed manner and share them globally without revealing their local data.
1 code implementation • 2 Apr 2019 • Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang
In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.
Ranked #4 on Transfer Learning on Office-Home
no code implementations • 2 Jan 2019 • Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel, Cyril Leung
In order to enable workforce management systems to follow the IEEE Ethically Aligned Design guidelines to prioritize worker wellbeing, we propose a distributed Computational Productive Laziness (CPL) approach in this paper.
no code implementations • 7 Dec 2018 • Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser, Qiang Yang
As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination.
no code implementations • 5 Aug 2018 • Siwei Feng, Han Yu, Marco F. Duarte
In this paper, we propose a metric for the relevance between a source sample and the target samples.
1 code implementation • 19 Jul 2018 • Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu
Existing methods either attempt to align the cross-domain distributions, or perform manifold subspace learning.
Ranked #1 on Domain Adaptation on Office-Caltech-10
no code implementations • 26 Jun 2018 • Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu
STL consists of two components: Stratified Domain Selection (STL-SDS) can select the most similar source domain to the target domain; Stratified Activity Transfer (STL-SAT) is able to perform accurate knowledge transfer.
no code implementations • CVPR 2017 • Si Liu, Changhu Wang, Ruihe Qian, Han Yu, Renda Bao
In this paper, we develop a Single frame Video Parsing (SVP) method which requires only one labeled frame per video in training stage.
no code implementations • 26 Jan 2016 • Simon Fauvel, Han Yu
In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem.