no code implementations • 1 May 2025 • Zixin Wang, Yuanming Shi, Yong Zhou, Jingyang Zhu, Khaled B. Letaief
Large artificial intelligence models (LAMs) possess human-like abilities to solve a wide range of real-world problems, demonstrating the potential to serve as experts across various domains and modalities.
no code implementations • 12 Nov 2024 • Tianqu Kang, Zixin Wang, Hengtao He, Jun Zhang, Shenghui Song, Khaled B. Letaief
Fine-tuning large pre-trained foundation models (FMs) on distributed edge devices presents considerable computational and privacy challenges.
1 code implementation • 5 Nov 2024 • Jingyu Xiao, Yuxuan Wan, Yintong Huo, Zixin Wang, Xinyi Xu, Wenxuan Wang, Zhiyao Xu, Yuhang Wang, Michael R. Lyu
To address these limitations, we propose four enhancement strategies: Interactive Element Highlighting, Failure-aware Prompting (FAP), Visual Saliency Enhancement, and Visual-Textual Descriptions Combination, all aimed at improving MLLMs' performance on the Interaction-to-Code task.
1 code implementation • 16 Oct 2024 • Zixin Wang, Dong Gong, Sen Wang, Zi Huang, Yadan Luo
To address these questions, we propose Token Condensation as Adaptation (TCA), a training-free adaptation method for CLIP by pruning class-irrelevant visual tokens while merging class-ambiguous tokens.
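As a rough illustration of the condensation idea (the scoring rule, ratios, and function names below are assumptions for the sketch, not TCA's actual criteria), one can rank visual tokens by their attention from the [CLS] token, keep the top-ranked ones, average-merge a middle "ambiguous" band, and prune the rest:

```python
import numpy as np

def condense_tokens(tokens, cls_attn, keep_ratio=0.5, merge_ratio=0.25):
    """Prune low-attention tokens and merge mid-attention ones.

    tokens:   (N, D) visual token embeddings
    cls_attn: (N,) attention weights from the [CLS] token
    Returns a reduced (M, D) token set.
    """
    n = len(tokens)
    order = np.argsort(cls_attn)[::-1]          # most-attended first
    n_keep = int(n * keep_ratio)
    n_merge = int(n * merge_ratio)

    kept = tokens[order[:n_keep]]               # class-relevant: keep as-is
    ambiguous = tokens[order[n_keep:n_keep + n_merge]]
    # class-ambiguous: collapse into a single averaged token
    merged = (ambiguous.mean(axis=0, keepdims=True)
              if n_merge else np.empty((0, tokens.shape[1])))
    # remaining class-irrelevant tokens are pruned entirely
    return np.concatenate([kept, merged], axis=0)

rng = np.random.default_rng(0)
toks = rng.normal(size=(8, 4))
attn = rng.random(8)
out = condense_tokens(toks, attn)
print(out.shape)  # (5, 4): 4 kept + 1 merged token
```

Because the method only drops or averages existing tokens, it needs no gradient updates, which is what makes this style of adaptation training-free.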
no code implementations • 22 Aug 2024 • Qiming Yang, Zixin Wang, Shinan Liu, Zizheng Li
Although the U-Net architecture has made significant progress in image segmentation in recent years, it still faces performance bottlenecks in remote sensing image segmentation.
1 code implementation • 4 Aug 2024 • Qinshuo Liu, Zixin Wang, Xi-An Li, Xinyao Ji, Lei Zhang, Lin Liu, Zhonghua Liu
Semiparametric statistics play a pivotal role in a wide range of domains, including missing data, causal inference, and transfer learning.
no code implementations • 3 Jul 2024 • Zixin Wang, Yong Zhou, Yuanming Shi, Khaled B. Letaief
In particular, by integrating low-rank adaptation (LoRA) with federated learning (FL), federated LoRA enables the collaborative FT of a global model with edge devices, achieving comparable learning performance to full FT while training fewer parameters over distributed data and preserving raw data privacy.
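A minimal sketch of how federated LoRA aggregation might look, assuming plain FedAvg over the low-rank factors (averaging A and B separately is a common simplification, not necessarily the paper's exact scheme; all names here are illustrative):

```python
import numpy as np

def lora_delta(A, B):
    """LoRA update: low-rank delta W = B @ A, with rank r << d."""
    return B @ A

def fedavg_lora(client_factors, weights=None):
    """Average LoRA factors (A_k, B_k) across clients.

    Note: averaging the factors separately is not the same as
    averaging the products B_k @ A_k; it is a common simplification.
    """
    k = len(client_factors)
    if weights is None:
        weights = [1.0 / k] * k
    A = sum(w * A_k for w, (A_k, _) in zip(weights, client_factors))
    B = sum(w * B_k for w, (_, B_k) in zip(weights, client_factors))
    return A, B

rng = np.random.default_rng(1)
d, r = 6, 2
W0 = rng.normal(size=(d, d))                 # frozen pre-trained weight
clients = [(rng.normal(size=(r, d)), rng.normal(size=(d, r)))
           for _ in range(3)]                # each client trains only A_k, B_k
A, B = fedavg_lora(clients)
W = W0 + lora_delta(A, B)                    # global model after one round
print(W.shape)
```

Only the small factors (2·r·d values each) travel between devices and server, which is the source of the communication and memory savings over full fine-tuning.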
1 code implementation • 19 Jun 2024 • Zhuoxiao Chen, Zixin Wang, Yadan Luo, Sen Wang, Zi Huang
We minimize the sharpness to cultivate a flat loss landscape to ensure model resiliency to minor data variations, thereby enhancing the generalization of the adaptation process.
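Minimizing sharpness in this way follows the general sharpness-aware minimization (SAM) pattern: ascend to the worst-case weights within a small ball, then descend using the gradient evaluated there. A toy sketch on a quadratic loss (the learning rate and radius are illustrative):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step: perturb the weights
    toward the worst case within an L2 ball of radius rho, then descend
    using the gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent perturbation
    return w - lr * grad_fn(w + eps)              # descend with "sharp" gradient

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is w itself
grad = lambda w: w
w = np.array([2.0, -1.0])
for _ in range(20):
    w = sam_step(w, grad)
print(np.linalg.norm(w))  # shrinks toward the flat minimum at 0
```

The only extra cost over plain gradient descent is one additional gradient evaluation per step, at the perturbed point.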
no code implementations • 12 May 2024 • Zixin Wang, Kongyang Chen
Machine unlearning is a complex process that necessitates the model to diminish the influence of the training data while keeping the loss of accuracy to a minimum.
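One common way to express this trade-off is to combine gradient descent on a retain-set loss with gradient ascent on a forget-set loss. The sketch below is a toy 1-D illustration of that pattern, not the paper's specific method; the losses and hyperparameters are made up for the example:

```python
import numpy as np

def unlearn_step(w, grad_retain, grad_forget, lr=0.1, alpha=0.2):
    """One unlearning step: descend on the retain-set loss while
    ascending on the forget-set loss to remove its influence."""
    return w - lr * (grad_retain(w) - alpha * grad_forget(w))

# Toy 1-D quadratic losses (illustrative only)
grad_retain = lambda w: w - 1.0   # retain loss 0.5 * (w - 1)^2
grad_forget = lambda w: w - 3.0   # forget loss 0.5 * (w - 3)^2
w = 3.0                           # model currently fits the forget data
for _ in range(100):
    w = unlearn_step(w, grad_retain, grad_forget)
print(w)  # settles near 0.5: pulled toward retain, pushed off forget
```

The coefficient `alpha` controls how aggressively the forget data's influence is removed relative to how much retain-set accuracy is preserved.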
1 code implementation • 31 Oct 2023 • Zixin Wang, Yadan Luo, Liang Zheng, Zhuoxiao Chen, Sen Wang, Zi Huang
This article presents a comprehensive survey of online test-time adaptation (OTTA), focusing on effectively adapting machine learning models to distributionally different target data upon batch arrival.
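A representative OTTA baseline in this line of work is Tent-style entropy minimization on each arriving batch. The sketch below adapts a stand-in per-class bias by finite-difference gradient descent on prediction entropy (the bias parameterization and hyperparameters are illustrative assumptions, not a specific surveyed method):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    """Mean Shannon entropy of a batch of class distributions."""
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def tent_like_step(logits, bias, lr=0.5):
    """One entropy-minimization step on a per-class bias, via finite
    differences. Tent adapts normalization parameters this way; a
    simple additive bias stands in for those parameters here."""
    grad = np.zeros_like(bias)
    h = 1e-5
    base = entropy(softmax(logits + bias))
    for i in range(len(bias)):
        b = bias.copy()
        b[i] += h
        grad[i] = (entropy(softmax(logits + b)) - base) / h
    return bias - lr * grad

rng = np.random.default_rng(2)
logits = rng.normal(size=(16, 4))      # one incoming test batch
bias = np.zeros(4)
before = entropy(softmax(logits + bias))
for _ in range(10):
    bias = tent_like_step(logits, bias)
after = entropy(softmax(logits + bias))
print(before > after)  # prediction entropy decreases as the model adapts
```

The defining OTTA constraint is visible here: adaptation uses only the current unlabeled batch, with no access to source data or target labels.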
1 code implementation • 16 Oct 2023 • Zhuoxiao Chen, Yadan Luo, Zixin Wang, Zijian Wang, Xin Yu, Zi Huang
This paper investigates a more practical and challenging research task: Open World Active Learning for 3D Object Detection (OWAL-3D), aimed at acquiring informative point clouds with new concepts.
1 code implementation • 6 Aug 2023 • Zixin Wang, Yadan Luo, Zhi Chen, Sen Wang, Zi Huang
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage, where private information from the source domain could inadvertently be exposed in the target domain.
no code implementations • 3 Jan 2023 • Yandong Shi, Lixiang Lian, Yuanming Shi, Zixin Wang, Yong Zhou, Liqun Fu, Lin Bai, Jun Zhang, Wei Zhang
Sixth-generation (6G) wireless systems are envisioned to enable the paradigm shift from "connected things" to "connected intelligence". They feature ultra-high density, large scale, dynamic heterogeneity, diversified functional requirements, and machine learning capabilities, leading to a growing need for highly efficient intelligent algorithms.
no code implementations • 13 Aug 2022 • Zhanpeng Yang, Yuanming Shi, Yong Zhou, Zixin Wang, Kai Yang
In this paper, we propose a decentralized blockchain-based FL (B-FL) architecture that uses a secure global aggregation algorithm to resist malicious devices, and deploys a practical Byzantine fault tolerance (PBFT) consensus protocol, with high effectiveness and low energy consumption, among multiple edge servers to prevent model tampering by a malicious server.
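As a generic illustration of Byzantine-robust aggregation (not the paper's specific secure aggregation algorithm), a coordinate-wise median ignores a minority of arbitrarily corrupted client updates where a plain mean would be destroyed:

```python
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median of client updates: the aggregate stays
    near the honest values as long as fewer than half the clients
    submit arbitrarily corrupted (Byzantine) updates."""
    return np.median(np.stack(updates), axis=0)

honest = [np.ones(4) + 0.01 * i for i in range(5)]   # 5 honest clients
malicious = [np.full(4, 1e6)]                        # 1 poisoned update
agg = robust_aggregate(honest + malicious)
print(agg)  # each coordinate stays near 1.0, unlike a plain mean
```

The breakdown point of the median (just under one half of the clients) is what gives this family of aggregators its robustness guarantee.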
1 code implementation • 11 Jul 2022 • Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, Zi Huang
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains to an unlabeled target domain.
no code implementations • 28 Mar 2022 • Yinan Zou, Zixin Wang, Xu Chen, Haibo Zhou, Yong Zhou
Based on the convergence analysis, we formulate an optimization problem to minimize the upper bound to enhance the learning performance, followed by proposing an alternating optimization algorithm to facilitate the optimal transceiver design for AirComp-assisted FL.
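The alternating-optimization pattern behind such transceiver designs can be shown on a toy two-block problem (the objective below is a stand-in, not the paper's convergence upper bound): fix one block, solve the other in closed form, and repeat, so each sweep can only decrease the objective:

```python
import numpy as np

def alternating_minimize(steps=50):
    """Minimize f(x, y) = (x-1)^2 + (y-2)^2 + 0.5*(x-y)^2 by
    alternating closed-form block updates (block coordinate descent)."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        # setting df/dx = 2(x-1) + (x-y) = 0 gives the x-update
        x = (2.0 + y) / 3.0
        # setting df/dy = 2(y-2) + (y-x) = 0 gives the y-update
        y = (4.0 + x) / 3.0
    return x, y

x, y = alternating_minimize()
print(x, y)  # converges to the joint minimizer (1.25, 1.75)
```

In the transceiver setting, the "blocks" are the transmit and receive variables: each subproblem becomes tractable once the other block is held fixed, which is why the alternating scheme is used.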
no code implementations • 22 Dec 2020 • Shiqi Sheng, Haijun Yang, Liuhua Mu, Zixin Wang, Jihong Wang, Peng Xiu, Jun Hu, Xin Zhang, Feng Zhang, Haiping Fang
We experimentally demonstrated that the AYFFF self-assemblies adsorbed with various monovalent cations (Na+, K+, and Li+) show unexpectedly super-strong paramagnetism.