1 code implementation • 6 Feb 2025 • Letian Peng, Chenyang An, Shibo Hao, Chengyu Dong, Jingbo Shang
The generalization of language models (LMs) is the subject of active debate, contrasting their potential for general intelligence with their struggles with basic knowledge composition (e.g., the reverse/transition curse).
no code implementations • 5 Dec 2024 • Ali Abbasi, Shima Imani, Chenyang An, Gayathri Mahalingam, Harsh Shrivastava, Maurice Diesendruck, Hamed Pirsiavash, Pramod Sharma, Soheil Kolouri
Next, we leverage a generative foundation model to dynamically expand this compressed set in real time, enhancing the resolution of these patches and introducing controlled variability into the coreset.
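A minimal sketch of the expansion idea described above, not the paper's code: a compressed coreset of low-resolution patches is expanded on the fly, with each patch upsampled and perturbed to introduce controlled variability. The `GenerativeUpsampler` class is a hypothetical stand-in for the generative foundation model.

```python
# Hypothetical sketch: real-time expansion of a compressed coreset.
import numpy as np

class GenerativeUpsampler:
    """Stand-in for a generative foundation model: upsampling plus controlled noise."""
    def __init__(self, scale: int = 4, noise_std: float = 0.05, seed: int = 0):
        self.scale, self.noise_std = scale, noise_std
        self.rng = np.random.default_rng(seed)

    def expand(self, patch: np.ndarray, num_variants: int = 2) -> list:
        # nearest-neighbor upsampling standing in for generative super-resolution
        up = patch.repeat(self.scale, axis=0).repeat(self.scale, axis=1)
        # controlled variability standing in for generative sampling
        return [np.clip(up + self.rng.normal(0, self.noise_std, up.shape), 0, 1)
                for _ in range(num_variants)]

# compressed coreset: a few 8x8 patches summarizing a larger dataset
coreset = [np.random.rand(8, 8) for _ in range(3)]
model = GenerativeUpsampler()

# real-time expansion into higher-resolution, varied training samples
expanded = [variant for patch in coreset for variant in model.expand(patch)]
print(len(expanded), expanded[0].shape)  # 6 samples of shape (32, 32)
```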
1 code implementation • 3 Oct 2024 • Letian Peng, Chenyang An, Jingbo Shang
In this paper, we study the effect of the key distribution on the next-token prediction (NTP) distribution, with a focus on whether similarity between keys triggers spurious correlations in NTP.
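A minimal illustration of the phenomenon studied, not the paper's setup: when two context representations ("keys") are close in embedding space, the NTP distributions they induce through a shared unembedding matrix are forced to be close as well, which can manifest as spurious correlations. The unembedding matrix and keys below are random placeholders.

```python
# Hypothetical sketch: similar keys induce highly correlated NTP distributions.
import numpy as np

def ntp_distribution(key: np.ndarray, unembedding: np.ndarray) -> np.ndarray:
    """Softmax over vocabulary logits computed as key . unembedding."""
    logits = key @ unembedding      # shape: (vocab_size,)
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

rng = np.random.default_rng(0)
d_model, vocab_size = 64, 1000
W_U = rng.normal(size=(d_model, vocab_size))      # placeholder unembedding matrix

key_a = rng.normal(size=d_model)
key_b = key_a + 0.05 * rng.normal(size=d_model)   # a near-duplicate key
key_c = rng.normal(size=d_model)                  # an unrelated key

p_a, p_b, p_c = (ntp_distribution(k, W_U) for k in (key_a, key_b, key_c))
print("corr(similar keys):  ", np.corrcoef(p_a, p_b)[0, 1])   # close to 1
print("corr(unrelated keys):", np.corrcoef(p_a, p_c)[0, 1])   # much smaller
```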
1 code implementation • 10 Apr 2024 • Chenyang An, Zhibo Chen, Qihao Ye, Emily First, Letian Peng, Jiayun Zhang, Zihan Wang, Sorin Lerner, Jingbo Shang
Recent advances in Automated Theorem Proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e., proof steps) to search through proof states.
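A minimal sketch of LM-guided proof search as described above: a language model proposes candidate tactics for the current proof state, and a best-first search explores the resulting states by cumulative model log-probability. The `propose_tactics` and `apply_tactic` functions are hypothetical stand-ins for the model and proof-assistant interfaces, not the paper's actual code.

```python
# Hypothetical sketch: best-first proof search guided by LM tactic proposals.
import heapq

def propose_tactics(state):
    """Stand-in for the LM: returns (tactic, log-probability) candidates."""
    return [("intro", -0.1), ("apply lemma_a", -0.7), ("simp", -1.2)]

def apply_tactic(state, tactic):
    """Stand-in for the proof assistant: returns the new proof state, 'QED', or None on failure."""
    return "QED" if tactic == "simp" else state + " | " + tactic

def best_first_search(initial_state, max_steps=100):
    # priority queue keyed by negative cumulative log-probability (lower = better)
    frontier = [(0.0, initial_state, [])]
    for _ in range(max_steps):
        if not frontier:
            return None
        cost, state, tactics = heapq.heappop(frontier)
        if state == "QED":
            return tactics                      # proof found: return the tactic sequence
        for tactic, logp in propose_tactics(state):
            new_state = apply_tactic(state, tactic)
            if new_state is not None:
                heapq.heappush(frontier, (cost - logp, new_state, tactics + [tactic]))
    return None

print(best_first_search("goal"))   # e.g. ['simp']
```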