2 code implementations • 4 Apr 2024 • Yi Ren, Shangmin Guo, Linlu Qiu, Bailin Wang, Danica J. Sutherland
With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase.
1 code implementation • 12 Oct 2023 • Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren
The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence.
1 code implementation • 5 Jul 2023 • Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills.
2 code implementations • 12 Oct 2022 • Tobias Fischer, Thomas E. Huang, Jiangmiao Pang, Linlu Qiu, Haofeng Chen, Trevor Darrell, Fisher Yu
In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of object regions on a pair of images for contrastive learning.
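The contrastive objective over sampled region pairs can be illustrated with a minimal InfoNCE-style sketch. This is not the paper's implementation; the embeddings, similarity function, and temperature value here are illustrative assumptions — in practice the regions come from dense proposals on an image pair and the embeddings from a learned head.

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, tau=0.07):
    """InfoNCE-style loss for one anchor region embedding.

    `positives` are embeddings of regions matching the same object
    (e.g., from the paired image); `negatives` are non-matching regions.
    The loss is low when the anchor is more similar to positives
    than to negatives.
    """
    def sim(a, b):
        # Cosine similarity between two embedding vectors.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.array([sim(anchor, p) for p in positives])
    neg = np.array([sim(anchor, n) for n in negatives])
    logits = np.concatenate([pos, neg]) / tau
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    # Negative log of the probability mass assigned to the positives.
    return -np.log(probs[: len(positives)].sum())
```

With an anchor aligned to its positive and orthogonal to its negatives, the loss is near zero; flipping the roles makes it large, which is the signal that drives the dense similarity learning.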
Ranked #4 on Multiple Object Tracking on BDD100K test
no code implementations • COLING 2022 • Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, Fei Sha
Then, it retrieves exemplars whose outputs are similar to the preliminary prediction, and uses them to generate a final prediction.
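The retrieval step can be sketched as ranking training exemplars by the similarity of their outputs to the preliminary prediction. The token-overlap (Jaccard) similarity and the exemplar dictionary format below are illustrative assumptions, not the paper's actual retriever.

```python
def retrieve_exemplars(preliminary, exemplars, k=2):
    """Return the k exemplars whose output strings best overlap
    (Jaccard similarity over whitespace tokens) with the
    preliminary prediction."""
    def jaccard(a, b):
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    ranked = sorted(exemplars,
                    key=lambda ex: jaccard(ex["output"], preliminary),
                    reverse=True)
    return ranked[:k]
```

The retrieved exemplars would then be appended to the model input so that the final prediction is conditioned on outputs structurally close to the preliminary one.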
no code implementations • 24 May 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova
Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling.
2 code implementations • NAACL 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositional generalization.
no code implementations • Findings (EMNLP) 2021 • Bowen Zhang, Hexiang Hu, Linlu Qiu, Peter Shaw, Fei Sha
We investigate ways to compose complex concepts in texts from primitive ones while grounding them in images.
2 code implementations • EMNLP 2021 • Linlu Qiu, Hexiang Hu, Bowen Zhang, Peter Shaw, Fei Sha
We analyze the grounded SCAN (gSCAN) benchmark, which was recently proposed to study systematic generalization for grounded language understanding.
3 code implementations • CVPR 2021 • Jiangmiao Pang, Linlu Qiu, Xia Li, Haofeng Chen, Qi Li, Trevor Darrell, Fisher Yu
Compared to methods with similar detectors, it improves MOTA by almost 10 points and significantly reduces the number of ID switches on the BDD100K and Waymo datasets.
Ranked #1 on One-Shot Object Detection on PASCAL VOC 2012 val