no code implementations • 19 Jun 2023 • Genta Indra Winata, Liang-Kang Huang, Soumya Vadlamannati, Yash Chandarana
Transformer-based language models have achieved remarkable success in few-shot in-context learning and have attracted considerable research interest.
no code implementations • CVPR 2018 • Hsiao-Yu Fish Tung, Adam W. Harley, Liang-Kang Huang, Katerina Fragkiadaki
Humans effortlessly "program" one another by communicating goals and desires in natural language.
no code implementations • ICCV 2017 • Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov
Many existing methods for learning joint embeddings of images and text use only supervised information from paired images and their textual attributes.
Ranked #5 on Generalized Few-Shot Learning on CUB