no code implementations • 25 Mar 2024 • Qin Tian, Wenjun Wang, Chen Zhao, Minglai Shao, Wang Zhang, Dong Li
Traditional machine learning methods rely heavily on the assumption that data are independent and identically distributed (i.i.d.), which imposes limitations when the test distribution deviates from the training distribution.
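A minimal toy illustration of why the i.i.d. assumption matters (not from the paper; the quadratic ground truth and the shifted input range are assumptions chosen to make the effect visible): a linear model fit on one input range degrades sharply when the test inputs come from a different range.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)   # training distribution
x_test = rng.uniform(2.0, 3.0, 200)    # shifted test distribution
f = lambda x: x ** 2                   # true (nonlinear) relationship

# least-squares linear fit on the training range only
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
coef, *_ = np.linalg.lstsq(A, f(x_train), rcond=None)

pred = lambda x: coef[0] * x + coef[1]
mse = lambda x: float(np.mean((pred(x) - f(x)) ** 2))
train_mse, test_mse = mse(x_train), mse(x_test)
# test_mse is orders of magnitude larger than train_mse
```

Under the i.i.d. assumption the low training error would transfer to test time; under the shift it does not, which is the failure mode this line of work targets.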
1 code implementation • 25 Mar 2024 • Zirui Yuan, Minglai Shao, Zhiqian Chen
In this problem, the seed set is a combination of influential users and information.
no code implementations • 2 Feb 2024 • Yujie Lin, Dong Li, Chen Zhao, Xintao Wu, Qin Tian, Minglai Shao
Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains.
1 code implementation • 23 Oct 2023 • Yuanjun Shi, Linzhi Wu, Minglai Shao
In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings.
no code implementations • 22 Sep 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Baoluo Meng, Xujiang Zhao, Haifeng Chen
This approach effectively separates environmental information and sensitive attributes from the embedded representation of classification features.
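One generic way to realize such a separation is a split-embedding encoder, sketched below. The linear encoder and the equal three-way partition of the latent vector are illustrative assumptions, not the paper's architecture; the point is only that classification features, environmental information, and sensitive attributes occupy disjoint blocks of the representation.

```python
import numpy as np

def split_encode(x, W):
    """Encode inputs, then partition the embedding into three blocks:
    classification features, environmental info, sensitive attributes.
    The equal split is an assumption for illustration."""
    z = np.tanh(x @ W)
    d = z.shape[1] // 3
    return z[:, :d], z[:, d:2 * d], z[:, 2 * d:]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))          # a small batch of inputs
W = rng.normal(size=(6, 9)) * 0.1    # hypothetical encoder weights
content, env, sensitive = split_encode(X, W)
# a downstream classifier would consume only `content`,
# keeping environment and sensitive attributes out of its input
```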
no code implementations • 31 Aug 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Xujiang Zhao, Haifeng Chen
In aligning p with p*, several factors can affect the adaptation rate, including the causal dependencies between variables in p. In real-life scenarios, however, we must also consider the fairness of the training process; in particular, it is crucial to account for a sensitive variable (bias) that lies between a cause variable and an effect variable.
no code implementations • 31 Aug 2023 • Dong Li, Wenjun Wang, Minglai Shao, Chen Zhao
As the basic element of graph-structured data, nodes have been recognized as the main objects of study in graph representation learning.
no code implementations • 25 Nov 2017 • Shuai Zhang, Jian-Xin Li, Pengtao Xie, Yingchun Zhang, Minglai Shao, Haoyi Zhou, Mengyi Yan
Similar to DNNs, an SKN is composed of multiple layers of hidden units, but each hidden unit is parameterized by an RKHS function rather than a finite-dimensional vector.
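This layer structure can be sketched with a kernel expansion: each hidden unit computes f(x) = Σᵢ αᵢ k(wᵢ, x) over a set of landmark points, so the unit's parameters live in an RKHS rather than being a single weight vector. The RBF kernel, landmark count, and tanh nonlinearity below are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def rbf_kernel(X, W, gamma=1.0):
    # pairwise RBF kernel between rows of X and landmark points W
    sq = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class KernelLayer:
    """One layer whose j-th hidden unit is the RKHS function
    f_j(x) = sum_i alpha[i, j] * k(w_i, x),
    instead of a finite-dimensional linear map."""
    def __init__(self, in_dim, n_units, n_landmarks=16, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_landmarks, in_dim))        # landmarks
        self.alpha = rng.normal(size=(n_landmarks, n_units)) * 0.1
        self.gamma = gamma

    def forward(self, X):
        return np.tanh(rbf_kernel(X, self.W, self.gamma) @ self.alpha)

# layers stack exactly as in a DNN
X = np.random.default_rng(1).normal(size=(8, 5))
h = KernelLayer(5, 4, seed=0).forward(X)
out = KernelLayer(4, 2, seed=2).forward(h)
```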