no code implementations • 30 Sep 2023 • Safoora Yousefi, Leo Betthauser, Hosein Hasanbeig, Raphaël Millière, Ida Momennejad
In this work, we investigate how LLM embeddings and attention representations change following in-context learning, and how these changes mediate improvements in behavior.
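One way to quantify such representational change (a minimal sketch, not the paper's method: the arrays, layer count, and drift metric here are hypothetical stand-ins for hidden states captured from a real model) is to compare per-layer hidden representations on the same probe input before and after in-context examples are added:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened representation tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical hidden states (layers x tokens x dim) for one probe input,
# captured before and after prepending in-context examples.
rng = np.random.default_rng(0)
before = rng.normal(size=(12, 8, 64))
after = before + 0.1 * rng.normal(size=(12, 8, 64))  # small simulated shift

# Per-layer representational drift: 1 - cosine similarity per layer.
drift = [1.0 - cosine_similarity(before[l], after[l]) for l in range(12)]
print([round(d, 4) for d in drift])
```

Layers whose drift correlates with behavioral improvement would then be candidates for mediating the effect.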
no code implementations • 24 Sep 2023 • Hosein Hasanbeig, Hiteshi Sharma, Leo Betthauser, Felipe Vieira Frujeri, Ida Momennejad
From grading papers to summarizing medical documents, large language models (LLMs) are increasingly used to evaluate text generated by humans and AI alike.
1 code implementation • 4 Feb 2022 • Leo Betthauser, Urszula Chajewska, Maurice Diesendruck, Rohith Pesala
Rapid progress in representation learning has led to a proliferation of embedding models, and to associated challenges of model selection and practical application.
no code implementations • 29 Apr 2019 • Leo Betthauser, Peter Bubenik, Parker B. Edwards
The sum of the graded persistence diagrams is the persistence diagram.
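This decomposition can be illustrated with a toy encoding (an assumption for illustration only: diagrams here are multisets of (birth, death) pairs, and the grading into two groups is invented): summing the graded diagrams as multisets recovers the full persistence diagram.

```python
from collections import Counter

# Hypothetical graded persistence diagrams, encoded as multisets
# (Counter) of (birth, death) pairs, keyed by grade.
graded = {
    1: Counter({(0.0, 1.0): 1}),
    2: Counter({(0.2, 0.7): 1, (0.5, 0.9): 1}),
}

# Their sum (multiset union) is the ordinary persistence diagram.
diagram = Counter()
for part in graded.values():
    diagram += part

print(sorted(diagram.elements()))
```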
Algebraic Topology (MSC: 55N31, 06A07)