no code implementations • 1 Aug 2024 • Bozhou Li, Hao Liang, Zimo Meng, Wentao Zhang
Moreover, we analyzed the effects of LLM backbone parameter size and data quality on pretraining outcomes.
1 code implementation • 26 May 2024 • Tianyi Bai, Hao Liang, Binwang Wan, Yanran Xu, Xi Li, Shiyu Li, Ling Yang, Bozhou Li, Yifan Wang, Bin Cui, Ping Huang, Jiulong Shan, Conghui He, Binhang Yuan, Wentao Zhang
Multimodal large language models (MLLMs) enhance the capabilities of standard large language models by integrating and processing data from multiple modalities, including text, vision, audio, video, and 3D environments.