Search Results for author: Uri Alon

Found 15 papers, 12 papers with code

A Systematic Evaluation of Large Language Models of Code

2 code implementations • 26 Feb 2022 • Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn

We aim to fill in some of these blanks through a systematic evaluation of the largest existing models: Codex, GPT-J, GPT-Neo, GPT-NeoX-20B, and CodeParrot, across various programming languages.

Language Modelling

Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval

no code implementations • 28 Jan 2022 • Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, Graham Neubig

Retrieval-based language models (R-LM) model the probability of natural language text by combining a standard language model (LM) with examples retrieved from an external datastore at test time.

Language Modelling
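
The description above is the general kNN-LM-style recipe: the base LM's next-token distribution is interpolated with a distribution induced by nearest-neighbor lookups in the datastore, p(w|c) = lam * p_kNN(w|c) + (1 - lam) * p_LM(w|c). The sketch below illustrates that interpolation with a toy NumPy datastore; the function and variable names are made up for the example, and it does not implement the automaton-augmented retrieval this paper proposes.

```python
# Minimal sketch of a retrieval-based LM: interpolate the base LM's next-token
# distribution with one induced by nearest neighbors from a datastore of
# (context vector -> next token) pairs. Toy data and names, illustration only.
import numpy as np

def knn_lm_next_token_probs(p_lm, query, datastore_keys, datastore_values,
                            vocab_size, k=4, lam=0.25):
    """p_lm: (V,) base LM probabilities; query: (d,) context representation;
    datastore_keys: (N, d) cached contexts; datastore_values: (N,) next-token ids."""
    # Retrieve the k closest stored contexts by L2 distance.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nn = np.argsort(dists)[:k]

    # Turn negative distances into a distribution over the retrieved tokens.
    weights = np.exp(-dists[nn])
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, datastore_values[nn]):
        p_knn[tok] += w

    # Fixed-weight mixture of the retrieval distribution and the base LM.
    return lam * p_knn + (1.0 - lam) * p_lm

# Toy usage: 3-token vocabulary, 5 cached contexts of dimension 2.
rng = np.random.default_rng(0)
keys, vals = rng.normal(size=(5, 2)), rng.integers(0, 3, size=5)
print(knn_lm_next_token_probs(np.full(3, 1 / 3), rng.normal(size=2), keys, vals, 3, k=2))
```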

How Attentive are Graph Attention Networks?

5 code implementations • ICLR 2022 • Shaked Brody, Uri Alon, Eran Yahav

Because GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data.

Graph Attention • Graph Property Prediction +3
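
As a rough illustration of the distinction drawn above (not the paper's implementation), the sketch below contrasts GAT's scoring function, LeakyReLU(a^T [W h_i || W h_j]), with the GATv2-style variant that applies the nonlinearity before the final linear layer, a^T LeakyReLU(W [h_i || h_j]); in GAT the ranking of neighbors does not depend on the query node, which is what makes its attention static. Shapes and random weights are made up for the example.

```python
# Contrast of GAT's static attention scoring with GATv2-style dynamic scoring.
# Shapes and random weights are illustrative only.
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_score(h_i, h_j, W, a):
    # GAT: e(h_i, h_j) = LeakyReLU(a^T [W h_i || W h_j]); the nonlinearity comes
    # after the linear layer a, so every query node i ranks its neighbors j the
    # same way ("static" attention).
    return leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))

def gatv2_score(h_i, h_j, W, a):
    # GATv2: e(h_i, h_j) = a^T LeakyReLU(W [h_i || h_j]); moving a after the
    # nonlinearity lets each query node rank its neighbors differently
    # ("dynamic" attention).
    return a @ leaky_relu(W @ np.concatenate([h_i, h_j]))

# Toy usage; note the two variants use differently shaped parameters.
rng = np.random.default_rng(0)
d = 8
h_i, h_j = rng.normal(size=d), rng.normal(size=d)
W_gat, a_gat = rng.normal(size=(d, d)), rng.normal(size=2 * d)
W_v2, a_v2 = rng.normal(size=(d, 2 * d)), rng.normal(size=d)
print(gat_score(h_i, h_j, W_gat, a_gat), gatv2_score(h_i, h_j, W_v2, a_v2))
```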

Single-Node Attack for Fooling Graph Neural Networks

1 code implementation • 6 Nov 2020 • Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon

When the adversary is allowed to pick a specific attacker node, the attack is even more effective.

On the Bottleneck of Graph Neural Networks and its Practical Implications

1 code implementation • ICLR 2021 • Uri Alon, Eran Yahav

Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs has been their struggle to propagate information between distant nodes in the graph.
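
As a toy illustration of that propagation problem (not the paper's experiments): after k rounds of message passing, a node has only received information from nodes at most k hops away, so a shallow GNN simply cannot mix the endpoints of a long path. The helper below, whose name is made up for this sketch, checks k-hop reachability with powers of the adjacency matrix.

```python
# A k-layer message-passing GNN's receptive field is the k-hop neighborhood,
# so information cannot flow between nodes farther apart than the depth.
import numpy as np

def k_hop_reachability(adj, k):
    """True at (i, j) iff information from node j can reach node i within k
    message-passing steps (self-loops model keeping one's own state)."""
    n = adj.shape[0]
    step = adj + np.eye(n, dtype=int)
    return np.linalg.matrix_power(step, k) > 0

# Path graph 0-1-2-3-4-5: the endpoints are 5 hops apart.
n = 6
adj = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1

print(k_hop_reachability(adj, 2)[0, 5])  # False: 2 layers never mix nodes 0 and 5
print(k_hop_reachability(adj, 5)[0, 5])  # True: at least 5 layers are needed
```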

A Structural Model for Contextual Code Changes

1 code implementation • 27 May 2020 • Shaked Brody, Uri Alon, Eran Yahav

We conduct a thorough evaluation, comparing our approach to a variety of representation and modeling approaches that are driven by multiple strong models such as LSTMs, Transformers, and neural CRFs.

EditCompletion

Adversarial Examples for Models of Code

3 code implementations • 15 Oct 2019 • Noam Yefet, Uri Alon, Eran Yahav

Our evaluations demonstrate that DAMP has up to 89% success rate in changing a prediction to the adversary's choice (a targeted attack) and a success rate of up to 94% in changing a given prediction to any incorrect prediction (a non-targeted attack).

Structural Language Models of Code

2 code implementations • ICML 2020 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav

We introduce a new approach to any-code completion that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).

Code Completion • Code Generation +1
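
As a small illustration of treating code as a tree rather than a flat token sequence, the sketch below parses a Python snippet with the standard ast module and prints its syntax tree. This is only the representation side: the paper's SLM additionally predicts missing tree nodes, which the sketch does not attempt.

```python
# Parse a snippet into its syntax tree to show the structure an SLM models,
# using Python's built-in parser purely for illustration.
import ast

snippet = "def area(r):\n    return 3.14159 * r * r\n"

def walk(node, depth=0):
    """Print node types with indentation so the tree structure is visible."""
    print("  " * depth + type(node).__name__)
    for child in ast.iter_child_nodes(node):
        walk(child, depth + 1)

walk(ast.parse(snippet))
# A structural language model assigns probabilities to the next *node* of such
# a tree (conditioned on the partial tree), rather than to the next token.
```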

Structural Language Models for Any-Code Generation

no code implementations • 25 Sep 2019 • Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav

We introduce a new approach to any-code generation (AnyGen) that leverages the strict syntax of programming languages to model a code snippet as a tree: structural language modeling (SLM).

Code Generation • Language Modelling

Neural Reverse Engineering of Stripped Binaries using Augmented Control Flow Graphs

1 code implementation • 25 Feb 2019 • Yaniv David, Uri Alon, Eran Yahav

This is a challenging problem because of the low amount of syntactic information available in stripped executables, and the diverse assembly code patterns arising from compiler optimizations.

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

3 code implementations • 21 Feb 2019 • Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus on sequence-to-sequence models.

Sequence-To-Sequence Speech Recognition

Contextual Speech Recognition with Difficult Negative Training Examples

no code implementations • 29 Oct 2018 • Uri Alon, Golan Pundak, Tara N. Sainath

Improving the representation of contextual information is key to unlocking the potential of end-to-end (E2E) automatic speech recognition (ASR).

Automatic Speech Recognition

code2seq: Generating Sequences from Structured Representations of Code

6 code implementations • ICLR 2019 • Uri Alon, Shaked Brody, Omer Levy, Eran Yahav

The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval.

Code Summarization • Source Code Summarization +1

A General Path-Based Representation for Predicting Program Properties

3 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav

A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning.
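
One concrete answer, named in the paper's title, is a path-based representation: describe a program by the syntactic paths that connect pairs of leaves in its abstract syntax tree. The sketch below is a heavily simplified version of that idea over Python's ast module; the helper names are made up, and the real extractor supports multiple languages and richer path encodings.

```python
# Simplified path-based representation: a snippet becomes a set of
# (leaf, path-of-node-types, leaf) triples extracted from its AST.
import ast
from itertools import combinations

def leaves_with_paths(node, prefix=()):
    """Yield (leaf token, root-to-leaf sequence of node types) for name/constant leaves."""
    prefix = prefix + (type(node).__name__,)
    if isinstance(node, ast.Name):
        yield node.id, prefix
    elif isinstance(node, ast.Constant):
        yield repr(node.value), prefix
    for child in ast.iter_child_nodes(node):
        yield from leaves_with_paths(child, prefix)

def path_contexts(code):
    """Pair leaves and join their root paths at the lowest common ancestor,
    giving an up-then-down path of node types between the two leaves."""
    leaves = list(leaves_with_paths(ast.parse(code)))
    for (tok_a, path_a), (tok_b, path_b) in combinations(leaves, 2):
        i = 0
        while i < min(len(path_a), len(path_b)) and path_a[i] == path_b[i]:
            i += 1
        up = list(reversed(path_a[i - 1:]))   # from leaf_a up to the common ancestor
        down = list(path_b[i:])               # from the ancestor down to leaf_b
        yield tok_a, " ".join(up + down), tok_b

for source, path, target in path_contexts("x = y + 1"):
    print(source, "->", path, "->", target)
```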

code2vec: Learning Distributed Representations of Code

9 code implementations • 26 Mar 2018 • Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav

We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body.
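
The vector representation mentioned above is built by attending over the path-contexts extracted from the method body. The NumPy sketch below mimics that aggregation with random weights and made-up dimensions; the trained model learns the context, path, and name embeddings jointly, which this sketch does not do.

```python
# Attention-style aggregation of path-context vectors into one code vector,
# then scoring candidate method names. Random weights, illustration only.
import numpy as np

rng = np.random.default_rng(0)
d, n_contexts, n_names = 16, 5, 3

contexts = rng.normal(size=(n_contexts, d))   # embedded (leaf, path, leaf) contexts
W = rng.normal(size=(d, d))                   # combination layer
a = rng.normal(size=d)                        # global attention vector
name_emb = rng.normal(size=(n_names, d))      # embeddings of candidate method names

combined = np.tanh(contexts @ W)              # combine each path-context
scores = combined @ a
alpha = np.exp(scores) / np.exp(scores).sum() # attention weights over contexts
code_vector = alpha @ combined                # single vector for the method body

logits = name_emb @ code_vector               # score candidate names
probs = np.exp(logits) / np.exp(logits).sum()
print("predicted name index:", int(np.argmax(probs)))
```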
