Search Results for author: Leo Betthauser

Found 4 papers, 1 paper with code

Decoding In-Context Learning: Neuroscience-inspired Analysis of Representations in Large Language Models

no code implementations · 30 Sep 2023 · Safoora Yousefi, Leo Betthauser, Hosein Hasanbeig, Raphaël Millière, Ida Momennejad

In this work, we investigate how LLM embeddings and attention representations change following in-context learning, and how these changes mediate improvements in behavior.

In-Context Learning · Reading Comprehension

ALLURE: Auditing and Improving LLM-based Evaluation of Text using Iterative In-Context-Learning

no code implementations · 24 Sep 2023 · Hosein Hasanbeig, Hiteshi Sharma, Leo Betthauser, Felipe Vieira Frujeri, Ida Momennejad

From grading papers to summarizing medical documents, large language models (LLMs) are increasingly used to evaluate text generated by humans and AI alike.

In-Context Learning

Discovering Distribution Shifts using Latent Space Representations

1 code implementation · 4 Feb 2022 · Leo Betthauser, Urszula Chajewska, Maurice Diesendruck, Rohith Pesala

Rapid progress in representation learning has led to a proliferation of embedding models, and to associated challenges of model selection and practical application.

Model Selection · Representation Learning

Graded persistence diagrams and persistence landscapes

no code implementations · 29 Apr 2019 · Leo Betthauser, Peter Bubenik, Parker B. Edwards

The sum of the graded persistence diagrams is the persistence diagram.
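The stated relationship can be written compactly. This is a hedged sketch of the identity, using assumed notation (Dgm for the ordinary persistence diagram, Dgm_k for the k-graded diagrams) rather than the paper's exact symbols:

```latex
% Sketch: the graded persistence diagrams sum to the persistence diagram.
% Notation is assumed for illustration, not taken from the paper.
\[
  \mathrm{Dgm}(f) \;=\; \sum_{k \ge 1} \mathrm{Dgm}_k(f)
\]
```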

Algebraic Topology · MSC: 55N31, 06A07
