Search Results for author: Veniamin Veselovsky

Found 5 papers, 3 papers with code

Do Llamas Work in English? On the Latent Language of Multilingual Transformers

1 code implementation • 16 Feb 2024 • Chris Wendler, Veniamin Veselovsky, Giovanni Monea, Robert West

Tracking intermediate embeddings through their high-dimensional space reveals three distinct phases, whereby intermediate embeddings (1) start far away from output token embeddings; (2) already allow for decoding a semantically correct next token in the middle layers, but give higher probability to its version in English than in the input language; (3) finally move into an input-language-specific region of the embedding space.
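
The layer-by-layer decoding described above is the kind of analysis done with a logit-lens-style probe: each intermediate hidden state is passed through the model's final norm and unembedding head to see which token it would decode to at that depth. Below is a minimal sketch of that idea using Hugging Face transformers; the model name, the prompt, and the `model.model.norm` / `model.lm_head` attribute names are assumptions based on the standard Llama architecture, not code taken from the paper.

```python
# Hypothetical logit-lens probe: decode every intermediate layer's hidden state
# and inspect which token it maps to (assumed model and prompt, not from the paper).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any Llama-style checkpoint works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = 'Français: "fleur" - English: "'  # example translation-style prompt (assumed)
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
for layer, h in enumerate(out.hidden_states):
    h_last = h[:, -1, :]                               # hidden state at the last position
    logits = model.lm_head(model.model.norm(h_last))   # final RMSNorm + unembedding matrix
    top_id = logits.argmax(dim=-1)
    print(f"layer {layer:2d} -> {tok.decode(top_id)!r}")
```

Printing the top token per layer makes the three phases visible: early layers decode noise, middle layers tend to decode the English version of the target word, and late layers switch to the input-language token.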

Prevalence and prevention of large language model use in crowd work

no code implementations • 24 Oct 2023 • Veniamin Veselovsky, Manoel Horta Ribeiro, Philip Cozzolino, Andrew Gordon, David Rothschild, Robert West

We show that the use of large language models (LLMs) is prevalent among crowd workers, and that targeted mitigation strategies can significantly reduce, but not eliminate, LLM use.

Tasks: Language Modelling, Large Language Model, +1

Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks

1 code implementation • 13 Jun 2023 • Veniamin Veselovsky, Manoel Horta Ribeiro, Robert West

With the widespread adoption of LLMs, human gold-standard annotations are key to understanding the capabilities of LLMs and the validity of their results.

Tasks: Text Classification
