no code implementations • 5 Nov 2024 • Jason Vega, Junsheng Huang, Gaokai Zhang, Hangoo Kang, Minjia Zhang, Gagandeep Singh
Safety alignment of Large Language Models (LLMs) has recently become a critical objective of model developers.
no code implementations • 18 Oct 2024 • Li Yuan, Yi Cai, Junsheng Huang
This method addresses the shortage of information in the few-shot setting by guiding a large language model to generate supplementary background knowledge; a sketch of the general idea follows below.
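The listing gives only this one-sentence summary, so the following is a minimal sketch of the general idea of knowledge-guided few-shot prompting, not the authors' actual pipeline. The prompt wording, the `augment_with_background` helper, and the generic `llm` callable are all illustrative assumptions.

```python
from typing import Callable, List

def augment_with_background(llm: Callable[[str], str],
                            query: str,
                            examples: List[str]) -> str:
    """Ask an LLM for background knowledge about the query, then prepend it
    to a few-shot prompt. Prompt text is hypothetical, not from the paper."""
    # Step 1: guide the model to generate supplementary background knowledge.
    knowledge_prompt = (
        "Provide brief background knowledge that would help answer or label "
        f"the following input:\n{query}"
    )
    background = llm(knowledge_prompt)

    # Step 2: build the few-shot prompt, injecting the generated knowledge to
    # compensate for the limited information carried by the few examples.
    shots = "\n\n".join(examples)
    return (
        f"Background knowledge:\n{background}\n\n"
        f"Examples:\n{shots}\n\n"
        f"Input:\n{query}\nAnswer:"
    )
```

Any LLM wrapper exposing a string-in, string-out interface could be passed as `llm`; the point of the sketch is only the two-step structure (generate knowledge, then condition the few-shot prompt on it).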
no code implementations • 7 Jun 2024 • Jie Deng, Wenhao Chai, Junsheng Huang, Zhonghan Zhao, Qixuan Huang, Mingyan Gao, Jianshu Guo, Shengyu Hao, Wenhao Hu, Jenq-Neng Hwang, Xi Li, Gaoang Wang
The rendered scenes lack variety: they closely resemble the training images, resulting in monotonous styles.