Search Results for author: Hugh Leather

Found 21 papers, 9 papers with code

LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers

no code implementations · 18 Mar 2024 · Massinissa Merouani, Khaled Afif Boudaoud, Iheb Nassim Aouadj, Nassim Tchoulak, Islem Kara Bernou, Hamza Benyamina, Fatima Benbouzid-Si Tayeb, Karima Benatchba, Hugh Leather, Riyadh Baghdadi

In this paper, we introduce LOOPer, the first polyhedral autoscheduler that uses a deep-learning based cost model and covers a large set of affine transformations and programs.
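
The shape of such an autoscheduler is easy to sketch: enumerate candidate schedules (sequences of affine transformations), score each with a learned cost model, and keep the lowest-cost one. A minimal sketch, where the transform list, featurizer, and "model" are made-up stand-ins rather than LOOPer's actual components:

```python
import random

# Hypothetical stand-ins: LOOPer's real cost model is a deep network
# trained on measured execution times; here we fake one for illustration.
TRANSFORMS = ["interchange", "tile", "unroll", "parallelize", "skew"]

def featurize(schedule):
    """Encode a schedule as a simple bag-of-transforms vector."""
    return [schedule.count(t) for t in TRANSFORMS]

def fake_cost_model(features):
    """Placeholder for a learned model: lower predicted cost is better."""
    weights = [0.9, 0.5, 0.8, 0.3, 1.1]  # made-up "learned" weights
    return sum(w * f for w, f in zip(weights, features)) + random.random()

def search(num_candidates=100, max_len=4):
    """Random search: sample candidate schedules, rank by predicted cost."""
    best, best_cost = None, float("inf")
    for _ in range(num_candidates):
        schedule = random.choices(TRANSFORMS, k=random.randint(1, max_len))
        cost = fake_cost_model(featurize(schedule))
        if cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

if __name__ == "__main__":
    schedule, cost = search()
    print(f"best schedule: {schedule} (predicted cost {cost:.2f})")
```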

Compiler generated feedback for Large Language Models

no code implementations · 18 Mar 2024 · Dejan Grubisic, Chris Cummins, Volker Seeker, Hugh Leather

We introduce a novel paradigm in compiler optimization: Large Language Models guided by compiler feedback to reduce the code size of LLVM assembly.

Compiler Optimization
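
The loop behind this paradigm: the model proposes an optimization pass pipeline, the compiler applies it and reports the resulting code size, and that report goes into the next prompt. A sketch under assumptions: LLVM's `opt` on PATH, a hypothetical `query_llm` callable in place of the model, and a line count as a crude size proxy:

```python
import os
import subprocess
import tempfile

def ir_instruction_count(ir_path):
    """Rough code-size proxy: count non-comment, indented lines in textual
    LLVM IR (real evaluations use the compiler's own instruction count)."""
    with open(ir_path) as f:
        return sum(1 for line in f
                   if line.startswith("  ") and not line.lstrip().startswith(";"))

def apply_passes(ir_path, passes, out_path):
    """Run LLVM's opt with a candidate pass pipeline (opt must be on PATH)."""
    subprocess.run(["opt", "-S", f"-passes={passes}", ir_path, "-o", out_path],
                   check=True)

def feedback_loop(ir_path, query_llm, rounds=3):
    """query_llm is a hypothetical callable (ir_text, feedback) -> pass list,
    standing in for the LLM; each round feeds the compiler's report back."""
    feedback, best = "", ir_instruction_count(ir_path)
    for _ in range(rounds):
        passes = query_llm(open(ir_path).read(), feedback)
        fd, out = tempfile.mkstemp(suffix=".ll")
        os.close(fd)
        apply_passes(ir_path, passes, out)
        count = ir_instruction_count(out)
        feedback = f"pipeline {passes!r} -> {count} instructions (best {best})"
        if count < best:
            best, ir_path = count, out
    return ir_path, best
```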

Priority Sampling of Large Language Models for Compilers

no code implementations · 28 Feb 2024 · Dejan Grubisic, Chris Cummins, Volker Seeker, Hugh Leather

Large language models show great potential in generating and optimizing code.
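
Priority Sampling, per the title, is a deterministic alternative to temperature sampling: it expands the token tree best-first by cumulative probability, so it returns unique samples ordered by model confidence. A toy sketch with a fixed bigram table standing in for the LLM:

```python
import heapq
import math

# Toy next-token distribution standing in for a real LLM (a fixed table,
# purely for illustration).
VOCAB = {
    "start": {"fast": 0.6, "small": 0.4},
    "fast": {"code": 0.7, "<eos>": 0.3},
    "small": {"code": 0.5, "<eos>": 0.5},
    "code": {"<eos>": 1.0},
}

def next_token_probs(prefix):
    return VOCAB.get(prefix[-1] if prefix else "start", {"<eos>": 1.0})

def priority_sampling(n_samples, max_len=5):
    """Deterministic best-first expansion of the token tree: always grow
    the unexpanded prefix with the highest cumulative probability, yielding
    unique samples ordered by model confidence."""
    heap = [(0.0, [])]  # (cumulative -log prob, token prefix)
    done = []
    while heap and len(done) < n_samples:
        neg_logp, prefix = heapq.heappop(heap)
        if (prefix and prefix[-1] == "<eos>") or len(prefix) >= max_len:
            done.append((math.exp(-neg_logp), prefix))
            continue
        for tok, p in next_token_probs(prefix).items():
            heapq.heappush(heap, (neg_logp - math.log(p), prefix + [tok]))
    return done

for prob, seq in priority_sampling(3):
    print(f"{prob:.2f}  {' '.join(seq)}")
```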

CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution

no code implementations · 5 Jan 2024 · Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, Sida I. Wang

The best setup, GPT-4 with chain of thought (CoT), achieves a pass@1 of 75% and 81% on input and output prediction, respectively.
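
pass@1 here is the standard unbiased pass@k estimator from the Codex paper: given n samples of which c are correct, pass@k = 1 - C(n-c, k)/C(n, k). A direct implementation of that formula (the numbers above are the paper's, not reproduced by this code):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the probability that at least one of k samples
    drawn from n total (of which c are correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. 10 samples per task, 8 correct: pass@1 = 0.8
print(pass_at_k(n=10, c=8, k=1))
```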

Sieve: Multimodal Dataset Pruning Using Image Captioning Models

1 code implementation · 3 Oct 2023 · Anas Mahmoud, Mostafa Elhoushi, Amro Abbas, Yu Yang, Newsha Ardalani, Hugh Leather, Ari Morcos

We propose a pruning signal, Sieve, that employs synthetic captions generated by image-captioning models pretrained on small, diverse, and well-aligned image-text pairs to evaluate the alignment of noisy image-text pairs.

Image Captioning · Language Modelling · +1
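
The pruning signal reduces to: caption the image with a pretrained captioner, embed the synthetic and noisy captions in a shared text space, and rank pairs by similarity. A sketch assuming hypothetical `caption_image` and `embed_text` callables in place of the paper's captioning model and sentence encoder:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sieve_score(image, noisy_caption, caption_image, embed_text, n_captions=4):
    """Alignment score for an (image, noisy_caption) pair: best similarity
    between the noisy caption and several synthetic captions, measured in
    text-embedding space. caption_image and embed_text are hypothetical
    callables standing in for a pretrained captioner and sentence encoder."""
    ref = embed_text(noisy_caption)
    synthetic = [caption_image(image) for _ in range(n_captions)]
    return max(cosine(embed_text(s), ref) for s in synthetic)

def prune(dataset, caption_image, embed_text, keep_fraction=0.5):
    """Rank pairs by sieve_score and keep the top fraction."""
    scored = sorted(dataset,
                    key=lambda pair: sieve_score(pair[0], pair[1],
                                                 caption_image, embed_text),
                    reverse=True)
    return scored[: int(len(scored) * keep_fraction)]
```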

BenchDirect: A Directed Language Model for Compiler Benchmarks

no code implementations · 2 Mar 2023 · Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather

We improve on BenchPress with BenchDirect, which uses a directed LM that infills programs by jointly observing the source code context and the targeted compiler features.

Active Learning · Language Modelling

Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning

no code implementations · 21 Dec 2022 · Chris Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, Charysse Redwood

Traditional approaches to RL have focused on learning decision policies directly from episodic decisions, while slowly and implicitly learning the semantics of compositional representations needed for generalization.

Few-Shot Learning · Language Modelling · +2

BenchPress: A Deep Active Benchmark Generator

1 code implementation · 13 Aug 2022 · Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather

We develop BenchPress, the first ML benchmark generator for compilers that is steerable within feature space representations of source code.

Active Learning
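
One simplified way to picture steering in feature space is sample-and-rank: generate candidate kernels, extract compiler features, and keep those nearest a target feature vector. The sketch below uses a hypothetical `generate_kernel` sampler and a toy regex featurizer; BenchPress itself steers a BERT-style infilling model with active learning, which this does not reproduce:

```python
import re

def toy_features(src: str) -> tuple:
    """Toy stand-in for compiler features: (#loops, #branches, #array refs)."""
    return (len(re.findall(r"\bfor\b|\bwhile\b", src)),
            len(re.findall(r"\bif\b", src)),
            len(re.findall(r"\w+\s*\[", src)))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def steer(generate_kernel, target, n_samples=200, keep=5):
    """generate_kernel is a hypothetical callable returning source strings.
    Rank samples by feature-space distance to the target, keep the best."""
    samples = [generate_kernel() for _ in range(n_samples)]
    samples.sort(key=lambda s: distance(toy_features(s), target))
    return samples[:keep]
```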

Code Translation with Compiler Representations

1 code implementation · 30 Jun 2022 · Marc Szafraniec, Baptiste Roziere, Hugh Leather, Francois Charton, Patrick Labatut, Gabriel Synnaeve

Here we propose to augment code translation with IRs, specifically LLVM IR, with results on the C++, Java, Rust, and Go languages.

Code Translation · Machine Translation · +2
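
The data-side prerequisite is extracting comparable LLVM IR from each front end. These compiler invocations use standard flags (the paper's IR canonicalization and the translation model itself are not shown):

```python
import subprocess

def cpp_to_ir(path: str, out: str) -> None:
    """Lower C++ to textual LLVM IR. -O0 keeps the IR close to the source;
    further canonicalization, as done in the paper, is skipped here."""
    subprocess.run(["clang++", "-S", "-emit-llvm", "-O0", path, "-o", out],
                   check=True)

def rust_to_ir(path: str, out: str) -> None:
    """Lower Rust to textual LLVM IR via rustc's --emit flag."""
    subprocess.run(["rustc", "--emit=llvm-ir", path, "-o", out], check=True)

# Aligned (source, IR) pairs from multiple languages can then supervise a
# translation model that pivots through the shared IR.
```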

LoopStack: a Lightweight Tensor Algebra Compiler Stack

1 code implementation · 2 May 2022 · Bram Wasti, José Pablo Cambronero, Benoit Steiner, Hugh Leather, Aleksandar Zlateski

We present LoopStack, a domain specific compiler stack for tensor operations, composed of a frontend, LoopTool, and an efficient optimizing code generator, LoopNest.

BIG-bench Machine Learning

CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research

1 code implementation · 17 Sep 2021 · Chris Cummins, Bram Wasti, Jiadong Guo, Brandon Cui, Jason Ansel, Sahir Gomez, Somya Jain, Jia Liu, Olivier Teytaud, Benoit Steiner, Yuandong Tian, Hugh Leather

What is needed is an easy, reusable experimental infrastructure for real-world compiler optimization tasks that can serve as a common benchmark for comparing techniques, and as a platform to accelerate progress in the field.

Compiler Optimization · OpenAI Gym
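
CompilerGym packages such tasks behind the familiar Gym API. A minimal session against the LLVM phase-ordering environment, assuming `pip install compiler_gym` and the API as of the 2021 release (details may have shifted since):

```python
import compiler_gym  # pip install compiler_gym

# LLVM phase-ordering task: choose optimization passes to shrink IR
# relative to -Oz (positive reward = fewer instructions than -Oz).
env = compiler_gym.make(
    "llvm-v0",
    benchmark="cbench-v1/qsort",
    observation_space="Autophase",
    reward_space="IrInstructionCountOz",
)
env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()
```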

Using Graph Neural Networks to model the performance of Deep Neural Networks

no code implementations · 27 Aug 2021 · Shikhar Singh, Benoit Steiner, James Hegarty, Hugh Leather

State-of-the-art deep-learning compilers like TVM and Halide incorporate a learning-based performance model to search the space of valid implementations of a given deep learning algorithm.

Value Function Based Performance Optimization of Deep Learning Workloads

no code implementations · 30 Nov 2020 · Benoit Steiner, Chris Cummins, Horace He, Hugh Leather

As machine learning techniques become ubiquitous, the efficiency of neural network implementations is becoming correspondingly paramount.

Scheduling

Deep Data Flow Analysis

no code implementations · 21 Nov 2020 · Chris Cummins, Hugh Leather, Zacharias Fisches, Tal Ben-Nun, Torsten Hoefler, Michael O'Boyle

Compiler architects increasingly look to machine learning when building heuristics for compiler optimization.

BIG-bench Machine Learning · Compiler Optimization

ProGraML: Graph-based Deep Learning for Program Optimization and Analysis

2 code implementations · 23 Mar 2020 · Chris Cummins, Zacharias V. Fisches, Tal Ben-Nun, Torsten Hoefler, Hugh Leather

We introduce ProGraML - Program Graphs for Machine Learning - a novel graph-based program representation using a low-level, language-agnostic, and portable format, together with machine learning models capable of performing complex downstream tasks over these graphs.

BIG-bench Machine Learning
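
A ProGraML graph interleaves control-, data-, and call-flow edges over instruction and value nodes. The released `programl` package builds these from real IR; below is only a hand-assembled toy graph in networkx to show the shape of the representation:

```python
import networkx as nx  # pip install networkx

# Toy ProGraML-style graph for:  %c = add %a, %b ; ret %c
g = nx.MultiDiGraph()
g.add_node("add", type="instruction")
g.add_node("ret", type="instruction")
for v in ("%a", "%b", "%c"):
    g.add_node(v, type="variable")

g.add_edge("add", "ret", flow="control")  # control flow between instructions
g.add_edge("%a", "add", flow="data")      # operands flow into the add
g.add_edge("%b", "add", flow="data")
g.add_edge("add", "%c", flow="data")      # add defines %c
g.add_edge("%c", "ret", flow="data")      # ret uses %c

print(g.number_of_nodes(), g.number_of_edges())  # 5 5
```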

Iterative compilation on mobile devices

1 code implementation · 9 Nov 2015 · Paschalis Mpeis, Pavlos Petoumenos, Hugh Leather

Replaying the targeted functions allows us to evaluate the effectiveness of each set of optimizations for the actual way the user interacts with the application.

Programming Languages
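
The offline search this enables is a classic iterative-compilation loop: rebuild the captured hot code under different flag sets and time the replay. A generic sketch; the flag list and `cc` harness are illustrative, not the paper's Android pipeline:

```python
import random
import subprocess
import time

BASE_LEVELS = ["-O2", "-O3"]
EXTRA_FLAGS = ["-funroll-loops", "-fomit-frame-pointer",
               "-ftree-vectorize", "-ffast-math"]

def build(src, flags, exe="./replay"):
    """Rebuild the replay binary for the captured hot functions."""
    subprocess.run(["cc", src, *flags, "-o", exe], check=True)
    return exe

def time_replay(exe, runs=3):
    """Best-of-n wall-clock time for replaying the captured inputs."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([exe], check=True)
        best = min(best, time.perf_counter() - t0)
    return best

def iterate(src, trials=20):
    """Random search over flag combinations, keeping the fastest replay."""
    best_flags, best_time = None, float("inf")
    for _ in range(trials):
        flags = [random.choice(BASE_LEVELS)] + random.sample(EXTRA_FLAGS, k=2)
        t = time_replay(build(src, flags))
        if t < best_time:
            best_flags, best_time = flags, t
    return best_flags, best_time
```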

Autotuning OpenCL Workgroup Size for Stencil Patterns

1 code implementation · 8 Nov 2015 · Chris Cummins, Pavlos Petoumenos, Michel Steuwer, Hugh Leather

Selecting an appropriate workgroup size is critical for the performance of OpenCL kernels, and requires knowledge of the underlying hardware, the data being operated on, and the implementation of the kernel.

Distributed, Parallel, and Cluster Computing
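
At its simplest, workgroup-size selection is a timing sweep over legal sizes. A sketch with a hypothetical `run_kernel` callable (e.g. a pyopencl harness that enqueues the stencil with a given local size and returns its runtime):

```python
def legal_sizes(max_wg=256):
    """Enumerate 2D workgroup sizes (wx, wy) with wx*wy within the device
    limit, powers of two only, as is typical for stencil kernels."""
    sizes = []
    wx = 1
    while wx <= max_wg:
        wy = 1
        while wx * wy <= max_wg:
            sizes.append((wx, wy))
            wy *= 2
        wx *= 2
    return sizes

def autotune(run_kernel, max_wg=256):
    """run_kernel is a hypothetical callable (wx, wy) -> runtime in seconds.
    Returns the fastest legal workgroup size."""
    return min(legal_sizes(max_wg), key=lambda wg: run_kernel(*wg))
```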
