no code implementations • 18 Nov 2024 • Huashan Sun, Yang Gao
In this paper, we discover that when a forgetting model passively receives a partial, appropriate rationale provided externally, its performance on the forgotten task can be restored.
no code implementations • 17 Jun 2024 • Heyan Huang, Yinghao Li, Huashan Sun, Yu Bai, Yang Gao
Recent studies have demonstrated that In-Context Learning (ICL) with specific demonstrations can align Large Language Models (LLMs) with human preferences, a phenomenon known as In-Context Alignment (ICA), indicating that models can follow human instructions without parameter updates.
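As a rough illustration of the ICL setup mentioned above (a generic sketch, not the authors' method), an in-context alignment prompt can be assembled by prepending demonstration pairs to a new query; all function names and strings here are hypothetical:

```python
# Minimal sketch of in-context alignment via demonstrations:
# (instruction, preferred response) pairs are prepended to the
# new query so the model can imitate the demonstrated behavior
# without any parameter updates. All examples are hypothetical.

def build_ica_prompt(demonstrations, query):
    """Concatenate demonstration pairs followed by the new query."""
    parts = []
    for instruction, response in demonstrations:
        parts.append(f"Instruction: {instruction}\nResponse: {response}")
    # The final entry leaves the response empty for the model to fill in.
    parts.append(f"Instruction: {query}\nResponse:")
    return "\n\n".join(parts)

demos = [
    ("Summarize: The cat sat on the mat.", "A cat rested on a mat."),
    ("Summarize: Rain fell all day in the city.", "It rained all day."),
]
prompt = build_ica_prompt(demos, "Summarize: The sun rose over the hills.")
```

The resulting string would be passed to an LLM as-is; the demonstrations, not any weight change, steer the model toward the preferred response style.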
2 code implementations • 14 Nov 2023 • Huashan Sun, Yixiao Wu, Yuhao Ye, Yizhe Yang, Yinghao Li, Jiawei Li, Yang Gao
Language style is necessary for AI systems to understand and generate diverse human language accurately.
no code implementations • 24 Oct 2023 • Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.
no code implementations • 19 Oct 2023 • Yiming Wang, Qian Huang, Bin Tang, Huashan Sun, Xing Li
In addition, most approaches ignore spatial and channel redundancy.