no code implementations • 17 May 2024 • Song Wang, Yushun Dong, Binchi Zhang, Zihan Chen, Xingbo Fu, Yinhan He, Cong Shen, Chuxu Zhang, Nitesh V. Chawla, Jundong Li
In this survey paper, we explore three aspects critical to enhancing safety in Graph ML: reliability, generalizability, and confidentiality.
no code implementations • 11 Mar 2024 • Chenhao Wang, Zihan Chen, Nikolaos Pappas, Howard H. Yang, Tony Q. S. Quek, H. Vincent Poor
In contrast, an Adam-like algorithm converges at the $\mathcal{O}( 1/T )$ rate, demonstrating its advantage in expediting the model training process.
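For context on the $\mathcal{O}(1/T)$ result, an Adam-style method maintains exponential moving averages of the gradient and its square and uses bias-corrected estimates to scale each step. The sketch below is a generic Adam update for illustration, not the specific algorithm analyzed in the paper; all names and defaults are standard but assumed here.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic Adam-style update (illustrative sketch).

    m, v: running first/second moment estimates; t: 1-indexed step count.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (gradient mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The per-coordinate scaling by `sqrt(v_hat)` is what distinguishes Adam-like methods from plain SGD and underlies their faster empirical training progress.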
1 code implementation • NeurIPS 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek, Kai Fong Ernest Chong
Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model cannot simultaneously satisfy the diverse performance requirements of local clients.
1 code implementation • 17 Jan 2024 • Zikai Xiao, Zihan Chen, Liyinglan Liu, Yang Feng, Jian Wu, Wanlu Liu, Joey Tianyi Zhou, Howard Hao Yang, Zuozhu Liu
Federated Long-Tailed Learning (Fed-LT), a paradigm in which data collected from decentralized local clients exhibits a globally long-tailed distribution, has recently garnered considerable attention.
no code implementations • 23 Dec 2023 • Zihan Chen, Jundong Li, Cong Shen
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions and mitigate the data scarcity issue.
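FedACS's exact attention mechanism is specified in the paper; as a hedged illustration of the general idea of weighting collaboration toward clients with similar data, one can compute softmax attention weights over similarity between client model updates. The function name and the cosine-similarity scoring below are assumptions for the sketch, not the paper's design.

```python
import numpy as np

def attention_aggregate(target_update, peer_updates, temperature=1.0):
    """Illustrative sketch (not FedACS itself): aggregate peer updates with
    softmax weights over cosine similarity to the target client's update."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = np.array([cos(target_update, p) for p in peer_updates]) / temperature
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, peer_updates))
```

Clients whose updates point in a similar direction receive larger weights, so a data-scarce client effectively borrows statistical strength from its most similar peers.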
no code implementations • 16 Dec 2023 • Muhammad Azeem Khan, Howard H. Yang, Zihan Chen, Antonio Iera, Nikolaos Pappas
Federated Learning (FL) offers a solution by preserving data privacy during training.
1 code implementation • NeurIPS 2023 • Zikai Xiao, Zihan Chen, Songshang Liu, Hualiang Wang, Yang Feng, Jin Hao, Joey Tianyi Zhou, Jian Wu, Howard Hao Yang, Zuozhu Liu
Data privacy and long-tailed distribution are the norms rather than the exception in many real-world tasks.
no code implementations • 6 Oct 2023 • Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q. S. Quek
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled a range of new generative AI applications.
no code implementations • 4 Oct 2023 • Zihan Chen, Jingyi Sun, Rong Liu, Feng Mai
Although the pervasive spread of misinformation on social media platforms has become a pressing challenge, existing platform interventions have shown limited success in curbing its dissemination.
no code implementations • 26 Jul 2023 • Zhiyu Cao, Zihan Chen, Prerna Mishra, Hamed Amini, Zachary Feinstein
Financial contagion has been widely recognized as a fundamental risk to the financial system.
no code implementations • 17 Jun 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek
Federated edge learning is envisioned as a bedrock for enabling intelligence in next-generation wireless networks, but limited spectral resources often constrain its scalability.
1 code implementation • 28 May 2023 • Zihan Chen, Lei Nico Zheng, Cheng Lu, Jialu Yuan, Di Zhu
However, its potential for inferring dynamic network structures from temporal textual data, specifically financial news, remains an unexplored frontier.
no code implementations • 24 Feb 2023 • Zihan Chen, Zeshen Li, Howard H. Yang, Tony Q. S. Quek
Additionally, we leverage a bi-level optimization framework to personalize the federated learning model so as to cope with the data heterogeneity issue.
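In a bi-level personalization setup, an outer level maintains a shared global model while an inner level adapts it to each client's local data. The sketch below shows only a generic inner adaptation loop on a least-squares objective; the objective, step counts, and function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def personalize(global_w, client_data, inner_steps=5, lr=0.1):
    """Illustrative inner level of a bi-level scheme: a client fine-tunes
    the shared global model on its own data (least-squares gradient steps)."""
    X, y = client_data
    w = global_w.copy()                      # start from the global model
    for _ in range(inner_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad                       # local gradient descent step
    return w
```

The outer level would then update the global model using feedback from these personalized solutions, which is what lets the framework cope with heterogeneous client distributions.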
1 code implementation • 28 Nov 2022 • Zihan Chen, Ziyue Wang, JunJie Huang, Wentao Zhao, Xiao Liu, Dejian Guan
Adding perturbations by exploiting auxiliary gradient information and discarding existing details of benign images are two common approaches for generating adversarial examples.
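The gradient-based family of attacks mentioned above is exemplified by the fast gradient sign method (FGSM), sketched below as background; the paper's own method may differ, and the `eps` budget here is an assumed example value.

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps=0.03):
    """FGSM-style perturbation (illustrative): step in the sign of the loss
    gradient w.r.t. the input, then clip back to the valid pixel range."""
    x_adv = x + eps * np.sign(grad_wrt_x)
    return np.clip(x_adv, 0.0, 1.0)
```

Because each pixel moves by at most `eps`, the adversarial image stays visually close to the benign one while the loss gradient direction pushes the model toward a misclassification.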
no code implementations • 23 Nov 2022 • Binxin Yang, Xuejin Chen, Chaoqun Wang, Chi Zhang, Zihan Chen, Xiaoyan Sun
With a semantic feature matching loss for effective semantic supervision, our sketch embedding precisely conveys the semantics in the input sketches to the synthesized images.
no code implementations • 30 Jun 2022 • Zihan Chen, Songshang Liu, Hualiang Wang, Howard H. Yang, Tony Q. S. Quek, Zuozhu Liu
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
1 code implementation • CVPR 2022 • Jingyi Xu, Zihan Chen, Tony Q. S. Quek, Kai Fong Ernest Chong
Although there exist methods in centralized learning for tackling label noise, such methods do not perform well on heterogeneous label noise in FL settings, due to the typically smaller sizes of client datasets and data privacy requirements in FL.
no code implementations • 31 Mar 2022 • Zihan Chen, Xingyu Li, Miaomiao Yang, Hong Zhang, Xu Steven Xu
We showed that unsupervised clustering of image patches can help identify predictive patches, exclude patches lacking predictive information, and thereby improve gene-mutation prediction in all three cancer types, compared with WSI-based methods without patch selection and with models based only on tumor regions.
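As a toy stand-in for the kind of unsupervised patch clustering described above, a simple k-means over patch feature vectors groups patches so that uninformative clusters can be excluded. This is an illustrative sketch only; the paper's feature extraction and clustering pipeline are not reproduced here.

```python
import numpy as np

def kmeans_patches(features, k=2, iters=20, seed=0):
    """Toy k-means over patch feature vectors (illustrative sketch).

    features: (n_patches, dim) float array; returns cluster labels and centers.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest center.
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Recompute centers as cluster means.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

Clusters whose patches carry little predictive signal could then be dropped before training the mutation-prediction model.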
no code implementations • 12 Aug 2021 • Zihan Chen, Kai Fong Ernest Chong, Tony Q. S. Quek
Federated learning (FL) offers a solution to train a global machine learning model while still maintaining data privacy, without needing access to data stored locally at the clients.
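The privacy property described above comes from the standard FL aggregation pattern: clients train locally and the server combines only their model parameters, never their raw data. A minimal sketch of the well-known FedAvg aggregation step (background illustration, not this paper's contribution):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters weighted by
    local dataset size; the server never sees the raw client data."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

Weighting by dataset size makes the aggregate equivalent to training on the pooled data when client updates are exact, which is the baseline most FL variants build on.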
no code implementations • 28 Jul 2021 • Zihan Chen, Marina Sokolova
In this study, we analyzed sentiments of COVID-related messages posted on r/Depression.
1 code implementation • 31 Aug 2020 • Yuhang Li, Xuejin Chen, Binxin Yang, Zihan Chen, Zhihua Cheng, Zheng-Jun Zha
In this paper, we explore the task of generating photo-realistic face images from hand-drawn sketches.