no code implementations • 29 Mar 2024 • Musashi Hinck, Matthew L. Olson, David Cobbley, Shao-Yen Tseng, Vasudev Lal
We train a suite of multimodal foundation models (MMFM) using the popular LLaVA framework with the recently released Gemma family of large language models (LLMs).
1 code implementation • 26 Feb 2024 • Paul Röttger, Valentin Hofmann, Valentina Pyatkin, Musashi Hinck, Hannah Rose Kirk, Hinrich Schütze, Dirk Hovy
Motivated by this discrepancy, we challenge the prevailing constrained evaluation paradigm for values and opinions in LLMs and explore more realistic unconstrained evaluations.
no code implementations • NeurIPS 2023 • Naoki Egami, Musashi Hinck, Brandon M. Stewart, Hanying Wei
In most scenarios, CSS researchers first obtain labels for documents and then, in a second step, explain those labels using interpretable regression analyses.