no code implementations • 26 Dec 2023 • Yida Chen, Yixian Gan, Sijia Li, Li Yao, Xiaohan Zhao
Recent work found high mutual information between the learned representations of large language models (LLMs) and the geospatial properties of their inputs, hinting at an emergent internal model of space.
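A crude sketch of the kind of analysis alluded to above: fit a linear probe from model representations to geospatial coordinates and check how much variance the probe explains, using linear decodability as a rough proxy for high mutual information. The data, dimensions, and probe here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 32

# Synthetic stand-ins: latitude/longitude labels and "representations"
# that linearly encode them plus noise (assumption for illustration).
coords = rng.uniform(-90, 90, size=(n, 2))
W = rng.normal(size=(2, d))
reps = coords @ W + 0.1 * rng.normal(size=(n, d))

# Least-squares linear probe: predict coordinates from representations.
A, *_ = np.linalg.lstsq(reps, coords, rcond=None)
pred = reps @ A

# R^2 of the probe: near 1.0 when geospatial info is linearly decodable.
r2 = 1 - ((coords - pred) ** 2).sum() / ((coords - coords.mean(0)) ** 2).sum()
print(round(r2, 3))
```

In this toy setup the probe recovers the coordinates almost perfectly because the representations were constructed to encode them; with real LLM activations the probe's fit quality is the quantity of interest.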
no code implementations • 14 Nov 2022 • Yi Liu, Song Guo, Jie Zhang, Qihua Zhou, Yingchun Wang, Xiaohan Zhao
We prove that FedFoA is a model-agnostic training framework that is readily compatible with state-of-the-art unsupervised FL methods.
no code implementations • 15 Sep 2022 • Xingyu Qu, Diyang Li, Xiaohan Zhao, Bin Gu
The SPL regime involves a self-paced regularizer and a gradually increasing age parameter; the age parameter plays a key role in SPL, yet determining where to optimally terminate its growth remains non-trivial.
no code implementations • 31 Dec 2020 • Shuai Liu, Xinran Xu, Zhihao Yang, Xiaohan Zhao, Wen Zhang
Computational experiments show that EPIHC outperforms existing state-of-the-art EPI prediction methods on both the benchmark and chromosome-split datasets, and the study reveals that the communicative learning module can bring in explicit information about EPIs, which is ignored by CNNs.