Search Results for author: Eran Yahav

Found 19 papers, 14 papers with code

How Attentive are Graph Attention Networks?

8 code implementations • ICLR 2022 • Shaked Brody, Uri Alon, Eran Yahav

Because GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data.

Graph Attention • Graph Property Prediction • +3
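The difference between the two scoring functions can be sketched in a few lines of numpy; the dimensions and randomly initialized parameters below are illustrative placeholders, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
d = 4  # hypothetical feature dimension, for illustration only

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

# GAT (Velickovic et al., 2018): score = LeakyReLU(a^T [W h_i || W h_j]).
# The attention vector a is applied before the nonlinearity, so the
# ranking of keys j is identical for every query i ("static" attention).
W = rng.normal(size=(d, d))
a = rng.normal(size=2 * d)

def gat_score(h_i, h_j):
    return leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))

# GATv2 (this paper): score = a^T LeakyReLU(W [h_i || h_j]).
# Applying a after the nonlinearity makes the scorer a proper MLP, so the
# ranking of keys can depend on the query ("dynamic" attention).
W2 = rng.normal(size=(d, 2 * d))
a2 = rng.normal(size=d)

def gatv2_score(h_i, h_j):
    return a2 @ leaky_relu(W2 @ np.concatenate([h_i, h_j]))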

code2vec: Learning Distributed Representations of Code

9 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav

We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body.
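A minimal sketch of the attention-based aggregation that produces the code vector; all shapes and parameters below are illustrative placeholders standing in for trained ones.

import numpy as np

rng = np.random.default_rng(0)
d, n_contexts, n_names = 8, 5, 100  # hypothetical sizes, not the paper's

contexts = rng.normal(size=(n_contexts, d))  # embedded path-contexts of one method body
attn_vec = rng.normal(size=d)                # global attention vector
W_out = rng.normal(size=(n_names, d))        # method-name classifier

# Attention-weighted sum of the path-context embeddings into one code vector.
scores = contexts @ attn_vec
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax
code_vector = weights @ contexts

# The method name is predicted from the aggregated code vector.
predicted_name_id = int(np.argmax(W_out @ code_vector))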

Adversarial Examples for Models of Code

3 code implementations • 15 Oct 2019 • Noam Yefet, Uri Alon, Eran Yahav

Our evaluations demonstrate that DAMP achieves a success rate of up to 89% in changing a prediction to the adversary's choice (a targeted attack), and of up to 94% in changing a given prediction to any incorrect prediction (a non-targeted attack).

code2seq: Generating Sequences from Structured Representations of Code

6 code implementations • ICLR 2019 • Uri Alon, Shaked Brody, Omer Levy, Eran Yahav

The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval.

Code Summarization • NMT • +3

Thinking Like Transformers

4 code implementations • 13 Jun 2021 • Gail Weiss, Yoav Goldberg, Eran Yahav

In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language.
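The paper's programming language is RASP; the snippet below is a simplified Python emulation of its select/aggregate style, with approximate (not exact) names and semantics.

def select(keys, queries, predicate):
    # Boolean "attention" pattern: entry [q][k] says whether query
    # position q attends to key position k.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # Copy the single selected value at each query position; a real
    # aggregate averages, but one-hot selection reduces to a copy.
    return [next(v for v, sel in zip(values, row) if sel) for row in selector]

tokens = list("hello")
indices = list(range(len(tokens)))
length = len(tokens)

# "Reverse" as a transformer-like program: position i attends to
# position length - 1 - i and copies its token.
flip = select(indices, indices, lambda k, q: k == length - 1 - q)
print(aggregate(flip, tokens))  # ['o', 'l', 'l', 'e', 'h']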

A General Path-Based Representation for Predicting Program Properties

3 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav

A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning.
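A toy illustration of the path-based idea, assuming a simplified extractor that pairs only consecutive AST leaves (not the paper's tooling):

import ast

def leaves_with_paths(tree):
    # Collect (leaf label, root-to-leaf path of node types) pairs.
    out = []
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, (ast.Name, ast.Constant)):
            label = node.id if isinstance(node, ast.Name) else repr(node.value)
            out.append((label, path))
            return
        for kid in ast.iter_child_nodes(node):
            walk(kid, path)
    walk(tree, [])
    return out

def path_contexts(code):
    leaves = leaves_with_paths(ast.parse(code))
    for (a, pa), (b, pb) in zip(leaves, leaves[1:]):
        i = 0  # length of the common prefix, up to the lowest common ancestor
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        path = pa[:i - 1:-1] + pb[i - 1:]  # up from a, down to b via the LCA
        yield a, " ".join(path), b

for ctx in path_contexts("x = y + 1"):
    print(ctx)  # e.g. ('x', 'Name Assign BinOp Name', 'y')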

Neural Reverse Engineering of Stripped Binaries using Augmented Control Flow Graphs

1 code implementation • 25 Feb 2019 • Yaniv David, Uri Alon, Eran Yahav

This is a challenging problem because of the low amount of syntactic information available in stripped executables, and the diverse assembly code patterns arising from compiler optimizations.

On the Bottleneck of Graph Neural Networks and its Practical Implications

2 code implementations • ICLR 2021 • Uri Alon, Eran Yahav

Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs has been their struggle to propagate information between distant nodes in the graph.
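The paper's proposed mitigation is a fully-adjacent final layer; a minimal numpy sketch on a random toy graph, with generic (not paper-specific) layer definitions:

import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                                   # toy graph size and feature dim
A = (rng.random((n, n)) < 0.3).astype(float)  # hypothetical adjacency matrix
np.fill_diagonal(A, 1.0)                      # add self-loops
X = rng.normal(size=(n, d))
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def gnn_layer(adj, H, W):
    # Generic message passing: mean over neighbors, linear map, ReLU.
    return np.maximum((adj / adj.sum(1, keepdims=True)) @ H @ W, 0.0)

H = gnn_layer(A, X, W1)   # ordinary sparse neighborhood aggregation
# The paper's simple fix for the bottleneck: make only the last layer
# fully adjacent, letting distant nodes exchange information directly.
FA = np.ones((n, n))
H = gnn_layer(FA, H, W2)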

Structural Language Models of Code

2 code implementations • ICML 2020 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav

We introduce a new approach to any-code completion that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).

C++ code • Code Completion • +2

On the Expressivity Role of LayerNorm in Transformers' Attention

1 code implementation • 4 May 2023 • Shaked Brody, Uri Alon, Eran Yahav

Layer Normalization (LayerNorm) is an inherent component in all Transformer-based models.

Language Modelling
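LayerNorm's geometry, as discussed in the paper, can be checked numerically; a minimal sketch without the learned gain and bias:

import numpy as np

def layer_norm(x, eps=1e-5):
    # LayerNorm without the learned gain and bias, for clarity.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# Geometric reading: subtracting the mean projects x onto the hyperplane
# orthogonal to the all-ones vector, and dividing by the standard
# deviation scales the result to norm ~sqrt(d).
x = np.random.default_rng(0).normal(size=8)
y = layer_norm(x)
print(y.sum())             # ~0: orthogonal to the ones vector
print(np.linalg.norm(y))   # ~sqrt(8)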

A Structural Model for Contextual Code Changes

1 code implementation • 27 May 2020 • Shaked Brody, Uri Alon, Eran Yahav

We conduct a thorough evaluation, comparing our approach to a variety of representation and modeling approaches that are driven by multiple strong models such as LSTMs, Transformers, and neural CRFs.

EditCompletion

Learning Deterministic Weighted Automata with Queries and Counterexamples

1 code implementation • NeurIPS 2019 • Gail Weiss, Yoav Goldberg, Eran Yahav

We present an algorithm for extraction of a probabilistic deterministic finite automaton (PDFA) from a given black-box language model, such as a recurrent neural network (RNN).

Language Modelling
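A rough schematic of query-driven extraction; the next_token_probs API and the tolerance-based state merging below are illustrative placeholders, not the paper's algorithm:

def extract_states(model, alphabet, tol=0.1, max_states=50):
    # Prefixes whose next-token distributions (as reported by the
    # black-box model) are within tol of an existing state's are merged.
    states, frontier = [], [""]
    while frontier and len(states) < max_states:
        prefix = frontier.pop(0)
        dist = model.next_token_probs(prefix)  # hypothetical query API
        if any(max(abs(dist[a] - d0[a]) for a in alphabet) < tol
               for _, d0 in states):
            continue                           # merged into an existing state
        states.append((prefix, dist))          # new state discovered
        frontier += [prefix + a for a in alphabet]
    return states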

On the Practical Computational Power of Finite Precision RNNs for Language Recognition

1 code implementation • ACL 2018 • Gail Weiss, Yoav Goldberg, Eran Yahav

While Recurrent Neural Networks (RNNs) are famously Turing-complete, this relies on infinite precision in the states and unbounded computation time.
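A toy contrast behind this finding, with hand-picked (not learned) update rules:

import math

def additive_count(seq):
    # An LSTM-style additive cell state can count unboundedly.
    c = 0.0
    for x in seq:
        c += 1.0 if x == "a" else -1.0  # saturated input/forget gates
    return c

def squashed_count(seq):
    # A squashed (tanh) state saturates and cannot represent large counts.
    h = 0.0
    for x in seq:
        h = math.tanh(h + (1.0 if x == "a" else -1.0))
    return h  # trapped in (-1, 1)

print(additive_count("a" * 100))  # 100.0
print(squashed_count("a" * 100))  # ~0.96, saturated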

Learning Disjunctions of Predicates

no code implementations • 15 Jun 2017 • Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav

Our algorithm asks at most $|F| \cdot OPT(F_\vee)$ membership queries, where $OPT(F_\vee)$ is the minimum worst-case number of membership queries needed to learn $F_\vee$.

Program Synthesis
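A toy elimination-based learner using membership queries; purely illustrative, since the paper's algorithm attains the stated bound with a more careful query strategy:

def learn_disjunction(F, oracle, domain):
    # F: name -> predicate. Each membership query on a negative point
    # rules out every candidate predicate that fires on it.
    candidates = set(F)
    for x in domain:                   # each iteration is one query
        if not oracle(x):
            candidates -= {f for f in candidates if F[f](x)}
    return candidates

F = {"x>5": lambda x: x > 5, "x<0": lambda x: x < 0, "even": lambda x: x % 2 == 0}
target = lambda x: x > 5 or x < 0      # hidden disjunction to recover
print(learn_disjunction(F, target, range(-3, 10)))  # {'x>5', 'x<0'}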

Towards Neural Decompilation

no code implementations • 20 May 2019 • Omer Katz, Yuval Olshaker, Yoav Goldberg, Eran Yahav

We address the problem of automatic decompilation, converting a program in low-level representation back to a higher-level human-readable programming language.

C++ code • Machine Translation • +1

A Formal Hierarchy of RNN Architectures

no code implementations • ACL 2020 • William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, Eran Yahav

While formally extending these findings to unsaturated RNNs is left to future work, we hypothesize that the practical learnable capacity of unsaturated RNNs obeys a similar hierarchy.

Structural Language Models for Any-Code Generation

no code implementations • 25 Sep 2019 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav

We introduce a new approach to AnyGen that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).

C++ code • Code Generation • +1

Diffusing Graph Attention

no code implementations • 1 Mar 2023 • Daniel Glickman, Eran Yahav

The dominant paradigm for machine learning on graphs uses Message Passing Graph Neural Networks (MP-GNNs), in which node representations are updated by aggregating information in their local neighborhood.

Graph Attention • Graph Classification • +2
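Schematically, the MP-GNN update referred to here is the standard one (not specific to this paper): $h_v^{(t+1)} = \mathrm{UPDATE}\big(h_v^{(t)},\ \mathrm{AGGREGATE}(\{h_u^{(t)} : u \in \mathcal{N}(v)\})\big)$, where $\mathcal{N}(v)$ is the local neighborhood of node $v$.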
