no code implementations • 25 Mar 2024 • Yingshan Chang, Yasi Zhang, Zhiyuan Fang, YingNian Wu, Yonatan Bisk, Feng Gao
We hypothesize that the underlying phenomenological coverage has not been scaled up proportionally, leading to a skewed distribution of the presented phenomena that harms generalization.
no code implementations • 9 Dec 2023 • Zhou Ziheng, YingNian Wu, Song-Chun Zhu, Demetri Terzopoulos
We introduce Aligner, a novel Parameter-Efficient Fine-Tuning (PEFT) method for aligning multi-billion-parameter Large Language Models (LLMs).
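The snippet above names PEFT but not Aligner's mechanism, so the following is only a hedged illustration of the general PEFT idea: freeze the pretrained weights and train a small low-rank adapter (a LoRA-style update, a common PEFT scheme). All names and hyperparameters are illustrative, not the paper's method.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Only A and B are trained; their product is a rank-`rank` correction.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus the scaled low-rank correction (B @ A) x.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

In practice such wrappers replace selected projection layers of the LLM, so only a tiny fraction of parameters receives gradients.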
no code implementations • 10 Sep 2023 • Yaxuan Zhu, Jianwen Xie, YingNian Wu, Ruiqi Gao
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming, and there exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
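The cost the abstract alludes to typically comes from MCMC sampling inside the training loop. Below is a hedged sketch of Langevin-dynamics sampling for a generic EBM; `energy_net`, the step count, and the step size are illustrative assumptions, not values from the paper.

import torch

def langevin_sample(energy_net, x, n_steps=60, step_size=0.01):
    """Short-run Langevin dynamics: noisy gradient descent on the energy."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_net(x).sum()
        grad, = torch.autograd.grad(energy, x)
        noise = torch.randn_like(x)
        # x_{t+1} = x_t - (eps/2) * grad E(x_t) + sqrt(eps) * N(0, I)
        x = x - 0.5 * step_size * grad + (step_size ** 0.5) * noise
        x = x.detach().requires_grad_(True)
    return x.detach()

Running dozens of such gradient steps per training batch is what makes EBM training slow relative to GANs and diffusion models.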
no code implementations • 26 Jun 2023 • Weinan Song, Yaxuan Zhu, Lei He, YingNian Wu, Jianwen Xie
Together, the translator, the style encoder, and the style generator constitute a diversified image generator.
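As a hedged sketch of how three such components might compose (the actual architecture is not given in the snippet, and every module and signature below is an assumption): the style code is either sampled from noise via the style generator or extracted from a reference image via the style encoder, and the translator renders the input under that style.

import torch.nn as nn

class DiversifiedGenerator(nn.Module):
    def __init__(self, translator, style_encoder, style_generator):
        super().__init__()
        self.translator = translator          # (image, style) -> image
        self.style_encoder = style_encoder    # reference image -> style code
        self.style_generator = style_generator  # noise z -> style code

    def forward(self, x, z=None, reference=None):
        # Style from a reference image (guided) or from noise (diverse sampling).
        style = (self.style_encoder(reference) if reference is not None
                 else self.style_generator(z))
        return self.translator(x, style)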
no code implementations • ACL 2021 • Wenjuan Han, Bo Pang, YingNian Wu
Transfer learning with large pretrained transformer-based language models like BERT has become the dominant approach for most NLP tasks.
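A minimal sketch of the transfer-learning recipe that sentence refers to: load pretrained BERT, attach a classification head, and fine-tune on labeled data. This uses the standard Hugging Face `transformers` API; the example sentence, label, and label count are placeholders.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained encoder + new task head

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
labels = torch.tensor([1])

model.train()
outputs = model(**batch, labels=labels)  # computes cross-entropy loss
outputs.loss.backward()
optimizer.step()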