1 code implementation • 12 Sep 2023 • Shihong Liu, Zhiqiu Lin, Samuel Yu, Ryan Lee, Tiffany Ling, Deepak Pathak, Deva Ramanan
We highlight the advantage of conversational feedback that incorporates both positive and negative prompts, suggesting that LLMs can utilize the implicit gradient direction in textual feedback for a more efficient search.
1 code implementation • CVPR 2023 • Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, Deva Ramanan
By repurposing class names as additional one-shot training samples, we achieve SOTA results with an embarrassingly simple linear classifier for vision-language adaptation.
1 code implementation • 21 Mar 2022 • Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
Our paper takes a step towards real-world physical commonsense reasoning by contributing PACS: the first audiovisual benchmark annotated for physical commonsense attributes.
1 code implementation • 14 Sep 2019 • Samuel Yu, Heon Lee, Jung Hoon Kim
In this paper, we address an issue that the visually impaired commonly face while crossing intersections and propose a solution that takes the form of a mobile application.
1 code implementation • 23 Jul 2019 • Samuel Yu, Heon Lee, John Kim
LYTNet delivers the two most important pieces of information the visually impaired need to cross the road.