1 code implementation • 12 Apr 2024 • Li Zhang, Shihe Wang, Xianqing Jia, Zhihan Zheng, Yunhe Yan, Longxi Gao, Yuanchun Li, Mengwei Xu
Emerging large language and multimodal models are driving the evolution of mobile agents, especially for the task of mobile UI automation.
1 code implementation • 16 Jan 2024 • Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, QiPeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment.
1 code implementation • 28 Aug 2023 • Jinliang Yuan, Chen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, Zeling Zhang, Xiang Li, Dingge Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang, Mengwei Xu
Concurrently, each app contributes a concise, offline fine-tuned "adapter" tailored to distinct downstream tasks.
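The per-app "adapter" idea can be sketched as a LoRA-style low-rank update applied on top of a frozen base layer. This is an illustrative assumption, not the paper's actual design: the names (`base_W`, `A`, `B`, `forward`) and the rank-1 linear setting are all hypothetical.

```python
import random

random.seed(0)

DIM, RANK = 4, 1  # base layer width; adapter rank (both illustrative)

# Frozen base weight, shared across apps (stands in for the foundation model).
base_W = [[random.gauss(0, 0.3) for _ in range(DIM)] for _ in range(DIM)]

# Per-app adapter: a low-rank update B @ A, fine-tuned offline per task.
A = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(RANK)]
B = [[0.0 for _ in range(RANK)] for _ in range(DIM)]  # zero-init: adapter starts as a no-op

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def forward(x):
    # Output = frozen base path + lightweight task-specific path.
    return [b + d for b, d in zip(matvec(base_W, x), matvec(B, matvec(A, x)))]

print(forward([1.0, 2.0, -1.0, 0.5]))
```

Only `A` and `B` (2 x DIM x RANK numbers) would be fine-tuned per app, which is what makes the adapter "concise" relative to the shared base weights.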
no code implementations • 20 Sep 2022 • Shihe Wang, Jianfeng Ren, Xiaoyu Lian, Ruibin Bai, Xudong Jiang
In this paper, we propose a feature augmentation method that employs a stacked auto-encoder to reduce noise in the data and boost the discriminant power of naive Bayes.
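The idea of auto-encoder-based feature augmentation for naive Bayes can be sketched as follows. This is a minimal illustration only: the paper uses a stacked auto-encoder, which is collapsed here to a single tied-weight layer with one hidden unit, and the toy data and all function names are hypothetical.

```python
import math, random

random.seed(0)

# Toy data: two classes, two noisy features.
data = [([random.gauss(m, 1.0), random.gauss(m, 1.0)], c)
        for c, m in ((0, 0.0), (1, 3.0)) for _ in range(50)]

# Center features before auto-encoding.
mu = [sum(x[j] for x, _ in data) / len(data) for j in range(2)]

# --- Tiny tied-weight auto-encoder with one hidden unit ---
w, lr = [0.5, 0.5], 0.002
for _ in range(150):
    for x, _ in data:
        z = [x[j] - mu[j] for j in range(2)]
        h = w[0] * z[0] + w[1] * z[1]                # encode
        err = [w[j] * h - z[j] for j in range(2)]    # reconstruction error
        for j in range(2):                           # gradient of squared error
            w[j] -= lr * 2 * (err[j] * h + sum(err[k] * w[k] for k in range(2)) * z[j])

def augment(x):
    # Append the hidden code as an extra (denoised) feature.
    return x + [sum(w[j] * (x[j] - mu[j]) for j in range(2))]

# --- Gaussian naive Bayes on the augmented features ---
def fit(rows):
    stats = {}
    for c in sorted({y for _, y in rows}):
        cols = list(zip(*(augment(x) for x, y in rows if y == c)))
        feats = []
        for col in cols:
            m_ = sum(col) / len(col)
            v_ = sum((v - m_) ** 2 for v in col) / len(col) + 1e-6
            feats.append((m_, v_))
        stats[c] = feats
    return stats

def predict(stats, x):
    def loglik(c):
        return sum(-0.5 * math.log(2 * math.pi * v_) - (v - m_) ** 2 / (2 * v_)
                   for v, (m_, v_) in zip(augment(x), stats[c]))
    return max(stats, key=loglik)

model = fit(data)
acc = sum(predict(model, x) == y for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

The classifier never sees the raw reconstruction; it sees the original features plus the learned code, which is the "augmentation" step.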
no code implementations • 20 Sep 2022 • Shihe Wang, Jianfeng Ren, Ruibin Bai, Yuan YAO, Xudong Jiang
Thus, we propose a Max-Dependency-Min-Divergence (MDmD) criterion that maximizes both the discriminant information and generalization ability of the discretized data.
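A criterion of this shape, dependency term minus divergence term, can be sketched for a single feature. This is a simplified stand-in, not the paper's MDmD: the dependency term is mutual information between the discretized feature and the class, and the divergence term is crudely approximated by a complexity penalty on the interval count; the weight `lam` and all names are hypothetical.

```python
import math, random
from collections import Counter

random.seed(1)

# Toy continuous feature with class-dependent distribution.
samples = [(random.gauss(0, 1), 0) for _ in range(200)] + \
          [(random.gauss(2, 1), 1) for _ in range(200)]
xs = [v for v, _ in samples]
ys = [y for _, y in samples]

def discretize(values, k):
    # Equal-frequency binning into k intervals; returns a bin index per value.
    order = sorted(values)
    cuts = [order[len(order) * i // k] for i in range(1, k)]
    return [sum(v >= c for c in cuts) for v in values]

def mutual_info(a, b):
    # I(A; B) in nats, from empirical joint and marginal counts.
    n = len(a)
    pab, pa, pb = Counter(zip(a, b)), Counter(a), Counter(b)
    return sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def score(k, lam=0.02):
    # Dependency (discriminant information) minus a divergence-like penalty.
    return mutual_info(discretize(xs, k), ys) - lam * k

best_k = max(range(2, 21), key=score)
print("chosen number of intervals:", best_k)
```

The trade-off is visible in the score: more intervals raise the empirical mutual information but are penalized, so the maximizer settles on a moderate discretization rather than one bin per sample.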
1 code implementation • 22 Nov 2021 • Shihe Wang, Jianfeng Ren, Ruibin Bai
Data discretization is important in naive Bayes.
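Why discretization matters here: naive Bayes with categorical likelihoods needs continuous features binned before counting. A minimal sketch, with equal-width binning and Laplace smoothing chosen for illustration (the bin count `K`, the toy data, and all names are hypothetical):

```python
import random
from collections import Counter, defaultdict

random.seed(2)

# One continuous toy feature, two classes.
rows = [(random.gauss(m, 1.0), c) for c, m in ((0, 0.0), (1, 2.5)) for _ in range(100)]

# Equal-width discretization into K intervals.
lo = min(v for v, _ in rows)
hi = max(v for v, _ in rows)
K = 4
def to_bin(v):
    return min(K - 1, int((v - lo) / (hi - lo) * K))

# Categorical naive Bayes on the binned feature.
counts = defaultdict(Counter)   # counts[class][bin]
prior = Counter()
for v, c in rows:
    counts[c][to_bin(v)] += 1
    prior[c] += 1

def predict(v):
    def score(c):
        # Class prior times smoothed bin likelihood (Laplace smoothing).
        return (prior[c] / len(rows)) * (counts[c][to_bin(v)] + 1) / (prior[c] + K)
    return max(prior, key=score)

acc = sum(predict(v) == c for v, c in rows) / len(rows)
print(f"training accuracy: {acc:.2f}")
```

The choice of cut points controls how much class-discriminative information survives the binning, which is exactly what a discretization criterion has to optimize.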