no code implementations • 1 May 2024 • Junsang Yoon, Akshat Gupta, Gopala Anumanchipalli
This study presents a targeted model editing analysis focused on the latest large language model, Llama-3.
2 code implementations • 21 Mar 2024 • Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli
We introduce a unifying framework that brings two leading "locate-and-edit" model editing techniques -- ROME and MEMIT -- under a single conceptual umbrella, optimizing for the same goal, which we call the preservation-memorization objective.
1 code implementation • 11 Mar 2024 • Akshat Gupta, Sidharth Baskaran, Gopala Anumanchipalli
With this paper, we provide a more stable implementation of ROME, which we call r-ROME. We show that model collapse is no longer observed when making large-scale sequential edits with r-ROME, while generalization and locality of model editing improve over the original implementation of ROME.
no code implementations • 22 Feb 2024 • Xiaoyang Song, Yuta Adachi, Jessie Feng, Mouwei Lin, Linhao Yu, Frank Li, Akshat Gupta, Gopala Anumanchipalli, Simerjot Kaur
In this paper, we investigate LLM personalities using an alternate personality measurement method, which we refer to as the external evaluation method: instead of prompting LLMs with multiple-choice questions on a Likert scale, we evaluate LLMs' personalities by analyzing their responses to open-ended situational questions using an external machine learning model.
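The external evaluation method described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the `TRAIT_KEYWORDS` heuristic is a hypothetical stand-in for a trained external classifier that scores Big Five traits from free text.

```python
# Hedged sketch of external evaluation for LLM personality:
# score open-ended answers with an external model instead of
# asking the LLM Likert-scale self-assessment questions.
# The keyword heuristic below is a toy stand-in (assumption),
# not the classifier used in the paper.

TRAIT_KEYWORDS = {
    "extraversion": {"party", "friends", "talk", "excited"},
    "agreeableness": {"help", "kind", "listen", "support"},
}

def score_response(text: str) -> dict:
    """Return a per-trait score: fraction of trait keywords present."""
    words = set(text.lower().split())
    return {
        trait: len(words & kws) / len(kws)
        for trait, kws in TRAIT_KEYWORDS.items()
    }

# An open-ended situational answer produced by the LLM under test:
llm_answer = "I would talk to my friends and help them feel better."
scores = score_response(llm_answer)
```

In the paper's setting, `score_response` would be replaced by a machine learning model trained to infer personality traits from text; the key point is that the judgment is external to the LLM being measured.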
no code implementations • 18 Jan 2024 • Jiachen Lian, Gopala Anumanchipalli
Speech disfluency modeling is the bottleneck for both speech therapy and language learning.
no code implementations • 15 Jan 2024 • Akshat Gupta, Anurag Rao, Gopala Anumanchipalli
With this in mind, we evaluate the current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT.
no code implementations • 4 Oct 2023 • Robin Netzorg, Bohan Yu, Andrea Guzman, Peter Wu, Luna McNulty, Gopala Anumanchipalli
Unlike other data modalities such as text and vision, speech does not lend itself to easy interpretation.
no code implementations • 15 Sep 2023 • Akshat Gupta, Xiaoyang Song, Gopala Anumanchipalli
These simple tests, done on ChatGPT and three Llama2 models of different sizes, show that self-assessment personality tests created for humans are unreliable measures of personality in LLMs.