no code implementations • 14 Feb 2024 • Narun Raman, Taylor Lundy, Samuel Amouyal, Yoav Levine, Kevin Leyton-Brown, Moshe Tennenholtz
We begin by surveying the economic literature on rational decision making, taxonomizing a large set of fine-grained "elements" that an agent should exhibit, along with dependencies between them.
no code implementations • 29 Jan 2024 • Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
Representation engineering yields gains in alignment-oriented tasks, such as resistance to adversarial attacks and reduction of social biases, but has also been shown to decrease the model's ability to perform basic tasks.
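For intuition, here is a toy sketch of the kind of intervention representation engineering performs, in the style of activation steering (our illustrative variant, not the paper's exact procedure; the layer index and the random steering vector are placeholders):

```python
# Toy activation steering: add a fixed direction to one layer's residual
# stream at inference time. Layer index and vector are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

layer = 6                                        # placeholder choice
steer = torch.randn(model.config.n_embd) * 0.1   # placeholder direction

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] holds the hidden states.
    return (output[0] + steer,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(add_steering)
ids = tok("The weather today is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()  # restore the unmodified model
```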
2 code implementations • 13 Jul 2023 • Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham
FACTOR automatically transforms a factual corpus of interest into a benchmark that evaluates an LM's propensity to generate true facts from the corpus versus similar but incorrect statements.
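For intuition, a minimal sketch of a FACTOR-style contrast (our simplification, not the released benchmark code; the model choice and example item are ours): score the true completion and a similar false one under the LM and check which is likelier.

```python
# Score a factual completion against a similar-but-wrong one; the model
# "passes" the item if the true fact gets higher log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def completion_logprob(prefix: str, completion: str) -> float:
    """Sum of token log-probs of `completion` given `prefix` (boundary
    tokenization is approximate, which is fine for a sketch)."""
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = model(full_ids).logits.log_softmax(dim=-1)
    total = 0.0
    # Logits at position i predict the token at position i + 1.
    for pos in range(prefix_len, full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

prefix = "The chemical symbol for gold is"
true_score = completion_logprob(prefix, " Au")
false_score = completion_logprob(prefix, " Ag")
print("prefers the true fact:", true_score > false_score)
```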
no code implementations • 31 May 2023 • Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, Yoav Shoham
Over the course of a month, the game was played by over 1.5 million users, who engaged in anonymous two-minute chat sessions with either another human or an AI language model that was prompted to behave like a human.
no code implementations • 19 Apr 2023 • Yotam Wolf, Noam Wies, Oshri Avnery, Yoav Levine, Amnon Shashua
An important aspect of developing language models that interact with humans is aligning their behavior to be useful and harmless for their human users.
1 code implementation • 31 Jan 2023 • Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, have been shown to significantly improve language modeling performance.
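The in-context variant is simple enough to sketch (a toy word-overlap retriever stands in for a real retriever such as BM25; the model and corpus are placeholders): retrieved text is simply prepended to the input of an unchanged, frozen LM.

```python
# In-context retrieval augmentation: prepend the top retrieved document
# to the prompt; the LM itself is untouched.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

corpus = [
    "Jupiter is the largest planet in the Solar System.",
    "The Nile is often considered the longest river on Earth.",
]

def retrieve(query: str) -> str:
    """Stand-in retriever: pick the document with the most word overlap."""
    q = set(query.lower().split())
    return max(corpus, key=lambda d: len(q & set(d.lower().split())))

prompt = "The largest planet in the Solar System is"
grounded = retrieve(prompt) + "\n" + prompt  # condition by concatenation
ids = tok(grounded, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:]))
```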
1 code implementation • 21 Dec 2022 • Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training.
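A rough sketch of the bookkeeping behind such a scheme (our illustration, not the released implementation): each context window reuses the same position ids and attends only within itself, while the task tokens attend to all windows.

```python
# Build PCW-style position ids and an attention mask: windows share the
# position range [0, w), do not see each other, and task tokens see all.
import torch

def pcw_layout(window_lens, task_len):
    total = sum(window_lens) + task_len
    position_ids = torch.empty(total, dtype=torch.long)
    mask = torch.zeros(total, total, dtype=torch.bool)  # True = may attend
    offset = 0
    for w in window_lens:
        # Every window is placed at positions 0..w-1, so a model with a
        # short trained context can host many windows side by side.
        position_ids[offset:offset + w] = torch.arange(w)
        mask[offset:offset + w, offset:offset + w] = True  # intra-window
        offset += w
    # Task tokens continue the position sequence and attend everywhere.
    position_ids[offset:] = torch.arange(max(window_lens),
                                         max(window_lens) + task_len)
    mask[offset:, :] = True
    causal = torch.tril(torch.ones(total, total, dtype=torch.bool))
    return position_ids, mask & causal

pos, attn = pcw_layout(window_lens=[4, 4], task_len=3)
print(pos.tolist())  # [0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6]
print(attn.int())
```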
no code implementations • 1 May 2022 • Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, Moshe Tennenholtz
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks.
no code implementations • 21 Apr 2022 • Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
To demonstrate this, we introduce three novel methods for leveraging frozen models: input-dependent prompt tuning, frozen readers, and recursive LMs, each of which vastly improves on current frozen-model approaches.
1 code implementation • 6 Apr 2022 • Noam Wies, Yoav Levine, Amnon Shashua
Recently, several works have demonstrated large gains from a straightforward approach to incorporating intermediate supervision in compound natural language problems: the sequence-to-sequence LM is fed an augmented input in which the decomposed subtasks' labels are simply concatenated to the original input.
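A toy illustration of this augmented-input recipe (the example task and step markers are ours):

```python
# Intermediate supervision appears in the *input*, so the seq2seq LM
# conditions on the decomposition instead of producing it unaided.
def augment_example(question: str, subtask_labels: list[str]) -> str:
    return question + " " + " ".join(
        f"<step{i}> {label}" for i, label in enumerate(subtask_labels, 1)
    )

src = augment_example(
    "How many legs do three spiders have in total?",
    ["a spider has 8 legs", "3 * 8 = 24"],
)
tgt = "24"
print(src)  # fed to the encoder; the decoder is trained to emit `tgt`
```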
no code implementations • ICLR 2022 • Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua
We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example than it can between text segments that appeared in different training examples.
no code implementations • 9 May 2021 • Noam Wies, Yoav Levine, Daniel Jannai, Amnon Shashua
After their successful debut in natural language processing, Transformer architectures are now becoming the de facto standard in many domains.
1 code implementation • ICLR 2021 • Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, Yoav Shoham
Specifically, we show experimentally that PMI-Masking reaches the performance of prior masking approaches in half the training time and consistently outperforms them at the end of training.
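For intuition, a bigram-level simplification of the underlying measure (the paper uses a length-normalized n-gram variant; the toy corpus is ours): collocations whose co-occurrence far exceeds chance are selected as joint masking units.

```python
# Pointwise mutual information over corpus counts: high-PMI collocations
# like "new york" are masked as one unit rather than token by token.
import math
from collections import Counter

tokens = ("the cat and the dog live in new york "
          "while the bird and the fish live in new york").split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1: str, w2: str) -> float:
    # PMI(w1, w2) = log [ p(w1, w2) / (p(w1) * p(w2)) ]
    p_joint = bigrams[(w1, w2)] / (n - 1)
    return math.log(p_joint / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# "new york" co-occurs far more than its unigram frequencies predict.
print(round(pmi("new", "york"), 2), ">", round(pmi("the", "cat"), 2))
```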
1 code implementation • NeurIPS 2020 • Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, Amnon Shashua
Our guidelines elucidate the depth-to-width trade-off in self-attention networks of sizes up to the scale of GPT3 (which we project to be too deep for its size), and beyond, marking an unprecedented width of 30K as optimal for a 1-trillion-parameter network.
no code implementations • ACL 2020 • Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, Yoav Shoham
The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding.
Ranked #11 on Word Sense Disambiguation on Words in Context
2 code implementations • 11 Feb 2019 • Or Sharir, Yoav Levine, Noam Wies, Giuseppe Carleo, Amnon Shashua
Artificial Neural Networks have recently been shown to be an efficient representation of highly entangled many-body quantum states.
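As a minimal illustration of a neural-network quantum state (using the classic restricted-Boltzmann-machine ansatz of Carleo and Troyer with real weights for simplicity; the paper itself studies deep autoregressive architectures): a small set of network weights encodes an amplitude for each of the 2^N spin configurations.

```python
# RBM wavefunction: psi(sigma) = exp(a.sigma) * prod_j 2*cosh(b_j + sigma.W_j)
# N*M + N + M parameters stand in for a 2^N-dimensional state vector.
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 8                               # visible spins, hidden units
a = rng.normal(scale=0.1, size=N)         # visible biases
b = rng.normal(scale=0.1, size=M)         # hidden biases
W = rng.normal(scale=0.1, size=(N, M))    # couplings

def amplitude(sigma: np.ndarray) -> float:
    """Unnormalized amplitude of one spin configuration (entries +1/-1)."""
    return float(np.exp(a @ sigma) * np.prod(2 * np.cosh(b + sigma @ W)))

sigma = np.array([1, -1, 1, -1])
print(amplitude(sigma))
```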
no code implementations • 26 Mar 2018 • Yoav Levine, Or Sharir, Nadav Cohen, Amnon Shashua
Modern deep learning has enabled unprecedented achievements in various domains.
no code implementations • ICLR 2018 • Yoav Levine, Or Sharir, Amnon Shashua
We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts.
1 code implementation • 25 Oct 2017 • Yoav Levine, Or Sharir, Alon Ziv, Amnon Shashua
A key attribute driving the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks that involve sequential data is their ability to model intricate long-term temporal dependencies.
no code implementations • 5 May 2017 • Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, Amnon Shashua
Expressive efficiency refers to the ability of a network architecture to realize functions that require an alternative architecture to be much larger.
no code implementations • ICLR 2018 • Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua
This description enables us to carry out a graph-theoretic analysis of a convolutional network, with which we demonstrate direct control over the deep network's inductive bias via its channel numbers, which are related to the min-cut in the underlying graph.