no code implementations • 19 Aug 2023 • Federico Cassano, John Gouwar, Francesca Lucchetti, Claire Schlesinger, Anders Freeman, Carolyn Jane Anderson, Molly Q Feldman, Michael Greenberg, Abhinav Jangda, Arjun Guha
We apply this approach to generate tens of thousands of validated training items for Julia, Lua, OCaml, R, and Racket.
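Each of those languages (Julia, Lua, OCaml, R, Racket) needs its own test runner, but the core validation idea, keeping only generated items whose accompanying tests actually pass, can be sketched minimally in Python. All names here (`validate_item`, the sample candidates) are illustrative, not from the paper.

```python
# Hypothetical sketch of test-based validation: keep only generated
# candidates whose unit tests pass when executed.

def validate_item(source: str, test: str) -> bool:
    """Run a candidate and its tests in a fresh namespace; True iff tests pass."""
    namespace = {}
    try:
        exec(source, namespace)   # define the candidate function
        exec(test, namespace)     # run its assertions
    except Exception:
        return False
    return True

candidates = [
    ("def add(a, b):\n    return a + b", "assert add(2, 3) == 5"),  # correct
    ("def add(a, b):\n    return a - b", "assert add(2, 3) == 5"),  # buggy
]
validated = [src for src, test in candidates if validate_item(src, test)]
```

Only the correct candidate survives the filter; scaled up, this is how a large pool of generations can be winnowed into validated training items.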
1 code implementation • 17 Aug 2022 • Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, Arjun Guha, Michael Greenberg, Abhinav Jangda
Using these new parallel benchmarks, we evaluate the multi-language performance of three state-of-the-art code generation models: Codex, CodeGen, and InCoder.
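Parallel code-generation benchmarks like this are typically scored with the unbiased pass@k estimator of Chen et al. (2021); the listing does not state the exact metric used here, but a minimal sketch of that standard estimator is:

```python
# Unbiased pass@k: given n sampled completions of which c pass the tests,
# estimate the probability that at least one of k drawn samples passes.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # too few failures to fill a k-sample with misses
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 samples of which 1 passes, `pass_at_k(2, 1, 1)` gives 0.5.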
2 code implementations • 12 May 2021 • Abhinav Jangda, Jun Huang, Guodong Liu, Amir Hossein Nodehi Sabet, Saeed Maleki, Youshan Miao, Madanlal Musuvathi, Todd Mytkowicz, Olli Saarikivi
Therefore, we present CoCoNeT, which provides a DSL for expressing programs that combine both computation and communication.
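This is not CoCoNeT's actual DSL, but the kind of schedule such a DSL lets a compiler derive, overlapping chunked communication with per-chunk computation, can be illustrated with a hypothetical Python sketch (all names and the stand-in operations are invented for illustration):

```python
# Illustrative overlap of "communication" and computation: transfers for
# all chunks are launched up front, and each chunk is computed on as soon
# as its transfer completes, instead of waiting for the whole tensor.
from concurrent.futures import ThreadPoolExecutor

def communicate(chunk):           # stand-in for e.g. an allreduce of one chunk
    return [x * 2 for x in chunk]

def compute(chunk):               # stand-in for a fused elementwise op
    return [x + 1 for x in chunk]

def pipelined(data, chunk_size=2):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    out = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(communicate, c) for c in chunks]  # start transfers
        for f in futures:          # consume each chunk as it arrives
            out.extend(compute(f.result()))
    return out

result = pipelined([1, 2, 3, 4])
```

The result equals what a fully sequential communicate-then-compute program would produce; only the schedule changes.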
no code implementations • 14 Sep 2020 • Abhinav Jangda, Sandeep Polisetty, Arjun Guha, Marco Serafini
Several representation learning algorithms for graph data, such as DeepWalk, node2vec, and GraphSAGE, sample the graph to produce mini-batches that are suitable for training a DNN.
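A hedged sketch of the sampling pattern the abstract refers to, in the DeepWalk/node2vec style: draw fixed-length random walks from the graph, turn them into (node, context) pairs, and cut those pairs into mini-batches for DNN training. Function names and parameters here are illustrative, not from any of the cited systems.

```python
# Random-walk sampling of a graph into mini-batches of (node, context) pairs.
import random

def random_walk(adj, start, length, rng):
    """One uniform random walk of at most `length` nodes from `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def walk_minibatches(adj, walk_length=4, window=1, batch_size=4, seed=0):
    """Yield mini-batches of (node, context) pairs from walks off every node."""
    rng = random.Random(seed)
    pairs = []
    for node in adj:
        walk = random_walk(adj, node, walk_length, rng)
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if i != j:
                    pairs.append((u, walk[j]))
    for i in range(0, len(pairs), batch_size):
        yield pairs[i:i + batch_size]

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a toy triangle graph
batches = list(walk_minibatches(adj))
```

With `window=1`, every context pair comes from consecutive walk steps and is therefore an edge of the graph, which is easy to check.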
no code implementations • 16 Jan 2019 • Abhinav Jangda, Gaurav Anand
Given the abstract syntax tree, we predict the types of all identifiers in just two passes over the tree, one bottom-up and one top-down, maintaining both a content and a context representation for every node.
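The two-pass scheme can be illustrated with a toy traversal: a bottom-up pass builds each node's "content" summary from its subtree, and a top-down pass gives each node "context" from its ancestors. The representations below are toy sets of labels standing in for the learned representations the paper uses; all names are illustrative.

```python
# Two passes over a tree: bottom-up content, then top-down context.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.content = set()    # filled by the bottom-up pass
        self.context = set()    # filled by the top-down pass

def bottom_up(node):
    """Content = this node's label plus everything in its subtree."""
    node.content = {node.label}
    for child in node.children:
        bottom_up(child)
        node.content |= child.content
    return node.content

def top_down(node, parent_ctx=frozenset()):
    """Context = labels of this node and all of its ancestors."""
    node.context = set(parent_ctx) | {node.label}
    for child in node.children:
        top_down(child, node.context)

tree = Node("module", [Node("assign", [Node("x"), Node("42")])])
bottom_up(tree)
top_down(tree)
```

After both passes, a leaf such as the identifier `x` carries only itself as content but sees `module` and `assign` in its context, mirroring how a type predictor can combine local and surrounding information per node.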