Search Results for author: Linlu Qiu

Found 14 papers, 10 papers with code

Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models

no code implementations • 21 Mar 2025 • Linlu Qiu, Fei Sha, Kelsey Allen, Yoon Kim, Tal Linzen, Sjoerd van Steenkiste

To evaluate whether contemporary LLMs are able to do so, we use the Bayesian inference framework from probability theory, which lays out the optimal way to update an agent's beliefs as it receives new information.

Bayesian Inference
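The belief updating that this abstract refers to follows Bayes' rule. A minimal sketch, with illustrative numbers that are not from the paper:

```python
def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Illustrative numbers: a test with 99% sensitivity, a 5% false-positive
# rate, and a 1% base rate for the condition being tested.
prior = 0.01
p_pos_given_h = 0.99
p_pos = p_pos_given_h * prior + 0.05 * (1 - prior)  # law of total probability
posterior = bayes_update(prior, p_pos_given_h, p_pos)  # ~0.167
```

Even with a highly sensitive test, the low base rate keeps the posterior well under 20%; this is the kind of normative update against which the paper evaluates LLMs.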

The Surprising Effectiveness of Test-Time Training for Few-Shot Learning

1 code implementation • 11 Nov 2024 • Ekin Akyürek, Mehul Damani, Adam Zweiger, Linlu Qiu, Han Guo, Jyothish Pari, Yoon Kim, Jacob Andreas

Language models (LMs) have shown impressive performance on tasks within their training distribution, but often struggle with structurally novel tasks even when given a small number of in-context task examples.

ARC Few-Shot Learning +4

Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps

1 code implementation • 9 Jul 2024 • Yung-Sung Chuang, Linlu Qiu, Cheng-Yu Hsieh, Ranjay Krishna, Yoon Kim, James Glass

We find that a linear classifier based on these lookback ratio features is as effective as a richer detector that utilizes the entire hidden states of an LLM or a text-based entailment model.

Hallucination
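As a rough illustration of the idea (not the paper's implementation; the function name, array shapes, and epsilon are assumptions), a lookback-style ratio can be computed from attention weights like so:

```python
import numpy as np

def lookback_ratios(attn: np.ndarray, n_context: int) -> np.ndarray:
    """Per-(layer, head) fraction of one generated token's attention mass
    that falls on the context span rather than on previously generated
    tokens.  `attn` has shape (layers, heads, seq_len)."""
    on_context = attn[..., :n_context].sum(axis=-1)
    on_generated = attn[..., n_context:].sum(axis=-1)
    return on_context / (on_context + on_generated + 1e-12)
```

The abstract's point is that a simple linear classifier over such per-head ratios matches far heavier detectors built on full hidden states or a separate entailment model.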

Learning to Reason via Program Generation, Emulation, and Search

1 code implementation • 25 May 2024 • Nathaniel Weir, Muhammad Khalifa, Linlu Qiu, Orion Weller, Peter Clark

CoGEX works by (1) training LMs to generate pseudo-programs; (2) teaching them to emulate their generated program's execution, including undefined leaf functions, so that the LM's knowledge fills in the execution gaps; and (3) using them to search over many programs to find an optimal one.

Code Generation In-Context Learning +1
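The three steps can be sketched as a generic generate-emulate-search loop; `generate`, `emulate`, and `score` here are hypothetical stand-ins for the LM calls, not CoGEX's actual API:

```python
def search_programs(generate, emulate, score, task, n: int = 8):
    """Generate-emulate-search in the spirit of steps (1)-(3): sample
    candidate pseudo-programs, emulate each one's execution, and keep the
    candidate whose emulated output scores best on the task."""
    best, best_score = None, float("-inf")
    for _ in range(n):
        program = generate(task)         # step (1): propose a pseudo-program
        output = emulate(program, task)  # step (2): emulate its execution
        s = score(output, task)          # step (3): rank the candidates
        if s > best_score:
            best, best_score = (program, output), s
    return best
```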

Bias Amplification in Language Model Evolution: An Iterated Learning Perspective

1 code implementation • 4 Apr 2024 • Yi Ren, Shangmin Guo, Linlu Qiu, Bailin Wang, Danica J. Sutherland

With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase.

Language Modelling

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

1 code implementation • 12 Oct 2023 • Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence.

Visually Grounded Concept Composition

no code implementations • Findings (EMNLP) 2021 • Bowen Zhang, Hexiang Hu, Linlu Qiu, Peter Shaw, Fei Sha

We investigate ways to compose complex concepts in texts from primitive ones while grounding them in images.

Sentence

Quasi-Dense Similarity Learning for Multiple Object Tracking

3 code implementations • CVPR 2021 • Jiangmiao Pang, Linlu Qiu, Xia Li, Haofeng Chen, Qi Li, Trevor Darrell, Fisher Yu

Compared to methods with similar detectors, it improves MOTA by almost 10 points and significantly decreases the number of ID switches on the BDD100K and Waymo datasets.

Contrastive Learning Metric Learning +4

Cannot find the paper you are looking for? You can Submit a new open access paper.