no code implementations • 4 Feb 2025 • Jianfeng Pan, Senyou Deng, Shaomang Huang
Research on LLM technologies is emerging rapidly, and most of it employs a 'fast thinking' approach to inference.
no code implementations • 16 Jul 2024 • Shaomang Huang, Jianfeng Pan, Hanzhong Zheng
With the growing need to apply LLMs across various domains, it remains an open research question how to efficiently train and build a model that has expertise in multiple domains while keeping the training cost low.