Search Results for author: Itay Itzhak

Found 2 papers, 2 papers with code

Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias

1 code implementation • 1 Aug 2023 • Itay Itzhak, Gabriel Stanovsky, Nir Rosenfeld, Yonatan Belinkov

Recent studies show that instruction tuning (IT) and reinforcement learning from human feedback (RLHF) dramatically improve the abilities of large language models (LMs).

Decision Making

Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens

1 code implementation • NAACL 2022 • Itay Itzhak, Omer Levy

Standard pretrained language models operate on sequences of subword tokens without direct access to the characters that compose each token's string representation.

Language Modelling
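A minimal sketch of the premise in the abstract above: a pretrained LM's tokenizer splits words into subword pieces, and the model itself only receives integer token IDs, never the underlying characters. This assumes the Hugging Face `transformers` package; "gpt2" is an arbitrary example checkpoint, not necessarily one studied in the paper.

```python
from transformers import AutoTokenizer

# Load a standard subword tokenizer (byte-pair encoding for GPT-2).
tokenizer = AutoTokenizer.from_pretrained("gpt2")

word = "spelling"
token_ids = tokenizer.encode(word)                    # what the model actually sees
tokens = tokenizer.convert_ids_to_tokens(token_ids)   # the subword pieces behind those IDs

print(token_ids)  # opaque integer IDs; no direct character access
print(tokens)     # subword strings; character composition is only implicit
```

If the model nonetheless learns which characters make up each token, that knowledge must emerge from distributional signal during pretraining, which is the question the paper probes.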
