no code implementations • 19 Nov 2024 • Qin Tian, Chen Zhao, Minglai Shao, Wenjun Wang, Yujie Lin, Dong Li
Subsequently, a representation learner is designed to disentangle domain-invariant semantic information from domain-specific variation information in node embeddings, leveraging causal reasoning for semantic identification and further enhancing generalization.
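As a rough illustration of the disentanglement idea above (not the paper's actual architecture; the module names and dimensions below are hypothetical), a representation learner can split a node embedding into a semantic part used for label prediction and a variation part used for domain prediction:

```python
import torch
import torch.nn as nn

class DisentangledNodeEncoder(nn.Module):
    """Splits a node representation into a domain-invariant semantic part
    (used to predict labels) and a domain-specific variation part
    (used to predict the source domain)."""

    def __init__(self, in_dim, hid_dim, sem_dim, var_dim, n_classes, n_domains):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.sem_head = nn.Linear(hid_dim, sem_dim)      # domain-invariant semantics
        self.var_head = nn.Linear(hid_dim, var_dim)      # domain-specific variation
        self.label_clf = nn.Linear(sem_dim, n_classes)   # classify from semantics only
        self.domain_clf = nn.Linear(var_dim, n_domains)  # identify domain from variation only

    def forward(self, x):
        h = self.encoder(x)
        z_sem, z_var = self.sem_head(h), self.var_head(h)
        return self.label_clf(z_sem), self.domain_clf(z_var)

# Toy usage on random node features (dimensions are arbitrary).
logits_y, logits_d = DisentangledNodeEncoder(64, 128, 32, 32, 5, 3)(torch.randn(10, 64))
```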
no code implementations • 23 Oct 2024 • Zhixia He, Chen Zhao, Minglai Shao, Yujie Lin, Dong Li, Qin Tian
In this work, we address both types of shifts simultaneously and introduce a novel challenge for OOD detection on graphs: graph-level semantic OOD detection under covariate shift.
no code implementations • 18 Aug 2024 • Dong Li, Chen Zhao, Minglai Shao, Wenjun Wang
Achieving the generalization of an invariant classifier from training domains to shifted test domains while simultaneously considering model fairness is a substantial and complex challenge in machine learning.
no code implementations • 10 Jul 2024 • Qiyao Peng, Hongtao Liu, Hongyan Xu, Qing Yang, Minglai Shao, Wenjun Wang
Finally, we feed the prompt text into the LLM and apply Supervised Fine-Tuning (SFT) so that the model generates personalized reviews for the given user and target item.
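A minimal sketch of this SFT step, assuming a placeholder GPT-2 backbone and a hypothetical user/item prompt (the paper's actual LLM, prompt template, and training setup are not specified here): the objective is standard next-token prediction with the prompt tokens masked out of the loss.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone; the paper's LLM may differ
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical training pair: a prompt built from the user/item profile, then the target review.
prompt = "User: u123 | Item: wireless earbuds | Write a review:"
review = " Great sound and the battery easily lasts a full day."

batch = tok(prompt + review, return_tensors="pt")
labels = batch["input_ids"].clone()
labels[:, : len(tok(prompt)["input_ids"])] = -100  # compute loss only on the review tokens

loss = model(**batch, labels=labels).loss  # standard next-token SFT objective
loss.backward()
optim.step()
```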
no code implementations • 13 Jun 2024 • Yujie Lin, Dong Li, Chen Zhao, Minglai Shao
Traditional methods for addressing fairness have failed in domain generalization due to their lack of consideration for distribution shifts.
no code implementations • 25 Mar 2024 • Qin Tian, Wenjun Wang, Chen Zhao, Minglai Shao, Wang Zhang, Dong Li
Traditional machine learning methods rely heavily on the assumption that data are independent and identically distributed (i.i.d.), which imposes limitations when the test distribution deviates from the training distribution.
1 code implementation • 25 Mar 2024 • Zirui Yuan, Minglai Shao, Zhiqian Chen
In this problem, the seed set is a combination of influential users and information.
no code implementations • 2 Feb 2024 • Minglai Shao, Dong Li, Chen Zhao, Xintao Wu, Yujie Lin, Qin Tian
Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains.
1 code implementation • 23 Oct 2023 • Yuanjun Shi, Linzhi Wu, Minglai Shao
In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings.
no code implementations • 22 Sep 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Baoluo Meng, Xujiang Zhao, Haifeng Chen
This approach effectively separates environmental information and sensitive attributes from the embedded representation of classification features.
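One common way to strip environment- and sensitive-attribute information from a learned representation is adversarial training with a gradient reversal layer; the DANN-style sketch below illustrates that general idea and is not necessarily the method of this paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # classification features
label_head = nn.Linear(32, 2)                          # task classifier
sensitive_head = nn.Linear(32, 2)                      # adversary predicting the sensitive attribute

x, y, s = torch.randn(8, 16), torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
z = encoder(x)
task_loss = nn.functional.cross_entropy(label_head(z), y)
# Reversed gradients push the encoder to remove information about s from z.
adv_loss = nn.functional.cross_entropy(sensitive_head(GradReverse.apply(z, 1.0)), s)
(task_loss + adv_loss).backward()
```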
no code implementations • 31 Aug 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Xujiang Zhao, Haifeng Chen
When aligning p with p*, several factors can affect the adaptation rate, including the causal dependencies between variables in p. In real-life scenarios, however, the fairness of the training process must also be considered, and it is particularly crucial to account for a sensitive variable (bias) that lies between a cause variable and an effect variable.
no code implementations • 31 Aug 2023 • Dong Li, Wenjun Wang, Minglai Shao, Chen Zhao
As the basic element of graph-structured data, the node has been recognized as the main object of study in graph representation learning.
no code implementations • 25 Nov 2017 • Shuai Zhang, Jian-Xin Li, Pengtao Xie, Yingchun Zhang, Minglai Shao, Haoyi Zhou, Mengyi Yan
Similar to DNNs, an SKN is composed of multiple layers of hidden units, but each unit is parameterized by an RKHS function rather than a finite-dimensional vector.
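A minimal sketch of a layer whose hidden units are RKHS functions, assuming a finite landmark (representer) expansion f_i(x) = sum_j a_ij k(x, c_j) with an RBF kernel; the paper's actual SKN parameterization may differ.

```python
import torch
import torch.nn as nn

class KernelLayer(nn.Module):
    """Each output unit is an RKHS function expanded over a shared set of
    learnable landmark points, f_i(x) = sum_j a_ij * k(x, c_j), with an RBF kernel."""

    def __init__(self, in_dim, n_units, n_landmarks=32, gamma=1.0):
        super().__init__()
        self.landmarks = nn.Parameter(torch.randn(n_landmarks, in_dim))       # c_j
        self.coeffs = nn.Parameter(torch.randn(n_units, n_landmarks) * 0.1)   # a_ij
        self.gamma = gamma

    def forward(self, x):                          # x: (batch, in_dim)
        d2 = torch.cdist(x, self.landmarks) ** 2   # squared distances to landmarks
        k = torch.exp(-self.gamma * d2)            # RBF kernel values, (batch, n_landmarks)
        return k @ self.coeffs.t()                 # each unit: weighted sum of kernel values

# Stacking kernel layers mimics the multi-layer structure described above.
skn = nn.Sequential(KernelLayer(10, 64), KernelLayer(64, 32), nn.Linear(32, 2))
out = skn(torch.randn(4, 10))
```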