1 code implementation • 4 May 2023 • Shaked Brody, Uri Alon, Eran Yahav
Layer Normalization (LayerNorm) is an inherent component in all Transformer-based models.
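For reference, LayerNorm normalizes each input vector across its feature dimension and applies a learned affine transform; a minimal PyTorch-style sketch (variable names are illustrative, not from the paper):

```python
import torch

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize over the feature dimension to zero mean and unit
    # variance, then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta
```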
no code implementations • 1 Mar 2023 • Daniel Glickman, Eran Yahav
The dominant paradigm for machine learning on graphs uses Message Passing Graph Neural Networks (MP-GNNs), in which node representations are updated by aggregating information in their local neighborhood.
Ranked #2 on Link Prediction on PCQM-Contact.
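As an illustration of the message-passing update described above, a minimal sketch (function and weight names are my own, not from the paper):

```python
import torch

def mp_layer(h, edges, w_self, w_nbr):
    # h: (num_nodes, dim) node features; edges: iterable of (src, dst).
    # Each node sums messages from its in-neighbors, then combines the
    # aggregate with its own representation -- the MP-GNN update.
    agg = torch.zeros_like(h)
    for src, dst in edges:
        agg[dst] += h[src]
    return torch.relu(h @ w_self + agg @ w_nbr)
```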
4 code implementations • 13 Jun 2021 • Gail Weiss, Yoav Goldberg, Eran Yahav
In this paper, we aim to change that by proposing a computational model for the transformer-encoder in the form of a programming language.
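The proposed language (RASP) is built around sequence operators; a toy Python rendering of its select/aggregate primitives (the primitive names follow the paper, but the implementation with uniform averaging is my own sketch):

```python
def select(keys, queries, predicate):
    # Boolean attention pattern: entry (q, k) is True when the
    # predicate holds between query position q and key position k.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # Each position averages the values it selects (uniform attention).
    return [sum(v for v, s in zip(values, row) if s) / max(sum(row), 1)
            for row in selector]

# Example: for each position, the fraction of tokens so far that are 'a'.
tokens = list("abaab")
idx = list(range(len(tokens)))
causal = select(idx, idx, lambda k, q: k <= q)
frac_a = aggregate(causal, [1 if t == "a" else 0 for t in tokens])
```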
8 code implementations • ICLR 2022 • Shaked Brody, Uri Alon, Eran Yahav
Because GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data.
Ranked #2 on Graph Regression on Lipophilicity.
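The fix (GATv2) reorders the operations in the attention scoring function; a minimal sketch of both variants for a single node pair (shapes are assumed, and note that W and a have different shapes in the two variants):

```python
import torch
import torch.nn.functional as F

def gat_score(a, W, h_i, h_j):
    # GAT (static attention): LeakyReLU is applied after the linear
    # scoring, so the ranking of keys j is identical for every query i.
    return F.leaky_relu(a @ torch.cat([W @ h_i, W @ h_j]))

def gatv2_score(a, W, h_i, h_j):
    # GATv2 (dynamic attention): the nonlinearity comes before the
    # final linear layer, letting the key ranking depend on the query.
    return a @ F.leaky_relu(W @ torch.cat([h_i, h_j]))
```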
3 code implementations • ICLR 2021 • Uri Alon, Eran Yahav
Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs has been their struggle to propagate information between distant nodes in the graph.
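The paper's simple remedy for this over-squashing is to make the last GNN layer fully adjacent; a hedged sketch of that idea (names are illustrative):

```python
import torch

def fully_adjacent_layer(h, w_self, w_other):
    # h: (num_nodes, dim). In the final layer, every node aggregates
    # every other node regardless of the input graph's edges, so
    # distant nodes exchange information directly.
    agg = h.sum(dim=0, keepdim=True) - h
    return torch.relu(h @ w_self + agg @ w_other)
```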
1 code implementation • 27 May 2020 • Shaked Brody, Uri Alon, Eran Yahav
We conduct a thorough evaluation, comparing our approach to a variety of representation and modeling approaches driven by strong models such as LSTMs, Transformers, and neural CRFs.
Ranked #1 on EditCompletion on C# EditCompletion.
no code implementations • ACL 2020 • William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, Eran Yahav
While formally extending these findings to unsaturated RNNs is left to future work, we hypothesize that the practical learnable capacity of unsaturated RNNs obeys a similar hierarchy.
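"Saturated" here refers to the limit in which all weights are scaled toward infinity, so squashing activations become step functions; a one-line numpy sketch of the saturating step (my own illustration):

```python
import numpy as np

def srn_step(W, U, b, h, x, scale=1e3):
    # Elman-RNN step; as `scale` grows, tanh is pushed to its +/-1
    # asymptotes and the network behaves like a discrete automaton --
    # the saturated regime the hierarchy is stated for.
    return np.tanh(scale * (W @ h + U @ x + b))
```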
1 code implementation • NeurIPS 2019 • Gail Weiss, Yoav Goldberg, Eran Yahav
We present an algorithm for extraction of a probabilistic deterministic finite automaton (PDFA) from a given black-box language model, such as a recurrent neural network (RNN).
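All the extraction needs from the black box is its next-symbol distribution after a prefix; a hedged sketch of that interface and a simplified state-merging test (the paper's actual tolerance criterion may differ):

```python
import numpy as np

def next_symbol_dist(lm, prefix):
    # Sole query to the black box: the probability vector over the
    # next symbol given an arbitrary prefix. `lm` is hypothetical.
    return np.asarray(lm(prefix))

def approx_equal(p, q, tol=0.1):
    # Merge two candidate PDFA states when their next-symbol
    # distributions p and q agree within `tol` on every symbol (a
    # simplified stand-in for the paper's variation tolerance).
    return np.max(np.abs(p - q)) <= tol
```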
3 code implementations • 15 Oct 2019 • Noam Yefet, Uri Alon, Eran Yahav
Our evaluations demonstrate that DAMP achieves a success rate of up to 89% in changing a prediction to the adversary's choice (a targeted attack) and of up to 94% in changing a given prediction to any incorrect prediction (a non-targeted attack).
2 code implementations • ICML 2020 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav
We introduce a new approach to any-code completion that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).
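Concretely, the snippet is handled as its syntax tree rather than a token sequence; a small Python sketch of just the representation side (the model itself, not shown, predicts tree nodes conditioned on paths through this tree):

```python
import ast

def tree_view(code):
    # Each node of the AST, paired with its children's types -- the
    # structure an SLM decodes node by node.
    root = ast.parse(code)
    return [(type(n).__name__,
             [type(c).__name__ for c in ast.iter_child_nodes(n)])
            for n in ast.walk(root)]

print(tree_view("x = y + 1"))
```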
no code implementations • 25 Sep 2019 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav
We introduce a new approach to AnyGen that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).
no code implementations • 20 May 2019 • Omer Katz, Yuval Olshaker, Yoav Goldberg, Eran Yahav
We address the problem of automatic decompilation: converting a program from a low-level representation back to a higher-level, human-readable programming language.
1 code implementation • 25 Feb 2019 • Yaniv David, Uri Alon, Eran Yahav
This is a challenging problem because of the scarce syntactic information available in stripped executables and the diverse assembly code patterns arising from compiler optimizations.
6 code implementations • ICLR 2019 • Uri Alon, Shaked Brody, Omer Levy, Eran Yahav
The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval.
1 code implementation • ACL 2018 • Gail Weiss, Yoav Goldberg, Eran Yahav
While Recurrent Neural Networks (RNNs) are famously Turing-complete, this result relies on infinite precision in the states and unbounded computation time.
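One concrete finding is that a finite-precision LSTM can still dedicate a cell to counting, which suffices to recognize languages like a^n b^n; a hand-wired sketch of that mechanism (the weights are mine, chosen to saturate the gates):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_counter_step(c, is_a, is_b, big=20.0):
    # Single-cell LSTM step with saturated gates: the cell state adds
    # ~1 per 'a' and subtracts ~1 per 'b', acting as a counter.
    i = sigmoid(big)                                # input gate ~ 1
    f = sigmoid(big)                                # forget gate ~ 1
    c_new = np.tanh(big * (int(is_a) - int(is_b)))  # ~ +1 or -1
    return f * c + i * c_new

c = 0.0
for ch in "aaabbb":
    c = lstm_counter_step(c, ch == "a", ch == "b")
print(round(c, 3))  # ~0.0 exactly when the a's and b's balance
```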
3 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav
A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning.
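The representation in question is built from paths in the program's AST; a small sketch of extracting one leaf-to-root path with Python's own ast module (paths between pairs of leaves, joined at their shared prefix, are then used as features):

```python
import ast

def leaf_to_root(code, name):
    # Node-type path from an identifier occurrence up to the root;
    # concatenating two such paths yields the leaf-to-leaf "AST paths"
    # used as the program representation.
    root, found = ast.parse(code), []

    def walk(node, trail):
        trail = trail + [type(node).__name__]
        if isinstance(node, ast.Name) and node.id == name:
            found.extend(reversed(trail))
        for child in ast.iter_child_nodes(node):
            walk(child, trail)

    walk(root, [])
    return found

print(leaf_to_root("x = y + 1", "y"))
# ['Name', 'BinOp', 'Assign', 'Module']
```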
9 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav
We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body.
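Under the hood (as I understand the architecture), the body is reduced to a bag of embedded path-contexts that attention combines into a single code vector; a hedged sketch of that aggregation step:

```python
import torch

def code_vector(path_contexts, attn):
    # path_contexts: (num_paths, dim) embeddings of the body's AST
    # path-contexts; attn: (dim,) learned attention vector. The method
    # is represented by the attention-weighted sum of its contexts,
    # from which its name is predicted.
    weights = torch.softmax(path_contexts @ attn, dim=0)
    return weights @ path_contexts
```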
1 code implementation • ICML 2018 • Gail Weiss, Yoav Goldberg, Eran Yahav
We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle.
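In this setup the RNN mainly needs to answer membership queries; a minimal sketch of that side of the oracle (`rnn` is a hypothetical scorer, not an API from the paper's code):

```python
def rnn_membership(rnn, word, threshold=0.5):
    # L* membership oracle: run the trained RNN on the word and
    # threshold its acceptance score.
    return rnn(word) >= threshold

# Equivalence queries are answered by checking the hypothesis DFA
# against an abstraction of the RNN's state space and returning any
# disagreeing word as a counterexample (see the paper for details).
```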
no code implementations • 15 Jun 2017 • Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav
Our algorithm asks at most $|F| \cdot OPT(F_\vee)$ membership queries where $OPT(F_\vee)$ is the minimum worst case number of membership queries for learning $F_\vee$.