Search Results for author: Kim Hazelwood

Found 12 papers, 4 papers with code

BenchDirect: A Directed Language Model for Compiler Benchmarks

no code implementations • 2 Mar 2023 • Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather

We improve this with BenchDirect which utilizes a directed LM that infills programs by jointly observing source code context and the compiler features that are targeted.

Active Learning • Language Modelling
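As a rough illustration of the "directed" infilling idea in the BenchDirect entry above (not taken from the paper's code), the sketch below shows one way a prompt could expose both the source code context and the targeted compiler features to an infilling language model. The [FEATURES]/[HOLE] tokens and the feature names are invented placeholders.

```python
# Hedged sketch (not BenchDirect's actual implementation): build an infilling
# prompt that conditions on both code context and target compiler features.

def build_directed_prompt(code_prefix: str, code_suffix: str,
                          target_features: dict[str, float],
                          hole_token: str = "[HOLE]") -> str:
    """Serialize target compiler features alongside the code context so the
    language model can condition its infilling on both."""
    feature_header = " ".join(
        f"{name}={value}" for name, value in sorted(target_features.items())
    )
    return f"[FEATURES] {feature_header} [CODE] {code_prefix}{hole_token}{code_suffix}"

# Example: request a kernel body whose static features approach a target point
# in the compiler's feature space (feature names here are made up).
prompt = build_directed_prompt(
    code_prefix="kernel void A(global int* a) {\n  ",
    code_suffix="\n}",
    target_features={"num_loads": 4, "num_stores": 2, "branch_count": 1},
)
print(prompt)  # this string would be fed to the infilling LM
```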

BenchPress: A Deep Active Benchmark Generator

1 code implementation • 13 Aug 2022 • Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather

We develop BenchPress, the first ML benchmark generator for compilers that is steerable within feature space representations of source code.

Active Learning
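To complement the BenchPress entry above, here is a hedged sketch of what "steerable within feature space" can mean in practice: sample candidate benchmarks, map each to a compiler feature vector, and keep the one closest to a target point. The generator and feature extractor below are trivial stand-ins, not BenchPress's learned components.

```python
# Hedged sketch of feature-space steering (not BenchPress's code).
import math
import random

def extract_features(benchmark: str) -> list[float]:
    # Placeholder static features: token count and memory-op count.
    tokens = benchmark.split()
    return [float(len(tokens)), float(sum(t in ("load", "store") for t in tokens))]

def generate_candidates(n: int) -> list[str]:
    # Stand-in for sampling from the benchmark-generating language model.
    ops = ["add", "mul", "load", "store", "branch"]
    return [" ".join(random.choices(ops, k=random.randint(3, 12))) for _ in range(n)]

def closest_to_target(target: list[float], candidates: list[str]) -> str:
    # Pick the candidate whose feature vector is nearest the target point.
    return min(candidates, key=lambda b: math.dist(extract_features(b), target))

target = [8.0, 3.0]  # desired point in feature space (illustrative)
best = closest_to_target(target, generate_candidates(64))
print(best, extract_features(best))
```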

Using Python for Model Inference in Deep Learning

no code implementations • 1 Apr 2021 • Zachary DeVito, Jason Ansel, Will Constable, Michael Suo, Ailing Zhang, Kim Hazelwood

We evaluate our design on a suite of popular PyTorch models on Github, showing how they can be packaged in our inference format, and comparing their performance to TorchScript.

Model extraction
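The abstract above mentions packaging models as Python and comparing them to TorchScript. The following is a minimal sketch of that kind of comparison, assuming a local PyTorch install; the tiny Sequential model, file names, and extern pattern are illustrative rather than the paper's evaluation setup.

```python
# Hedged sketch: package a model as deployable Python with torch.package and
# check its output against a TorchScript export of the same model.
import torch
import torch.nn as nn
from torch.package import PackageExporter, PackageImporter

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
example = torch.randn(8, 64)

# TorchScript baseline: compile to a Python-free serialized program.
scripted = torch.jit.script(model)
scripted.save("model_ts.pt")

# Packaged-Python alternative: pickle the model into an archive, referencing
# the host torch installation for its dependencies.
with PackageExporter("model_pkg.pt") as exporter:
    exporter.extern("torch.**")
    exporter.save_pickle("model", "model.pkl", model)

loaded = PackageImporter("model_pkg.pt").load_pickle("model", "model.pkl")
with torch.no_grad():
    print(torch.allclose(loaded(example), scripted(example), atol=1e-6))
```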

Understanding Training Efficiency of Deep Learning Recommendation Models at Scale

no code implementations • 11 Nov 2020 • Bilge Acun, Matthew Murphy, Xiaodong Wang, Jade Nie, Carole-Jean Wu, Kim Hazelwood

The use of GPUs has proliferated for machine learning workflows and is now considered mainstream for many deep learning models.

Exploiting Parallelism Opportunities with Deep Learning Frameworks

1 code implementation • 13 Aug 2019 • Yu Emma Wang, Carole-Jean Wu, Xiaodong Wang, Kim Hazelwood, David Brooks

State-of-the-art machine learning frameworks support a wide variety of design features to enable a flexible machine learning programming interface and to ease the programmability burden on machine learning developers.

BIG-bench Machine Learning
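As a small illustration of the framework-level parallelism settings this paper studies, the sketch below (assuming a local PyTorch install) times a matrix multiply under different intra-op thread counts; the matrix sizes and repetition count are arbitrary.

```python
# Hedged sketch: PyTorch exposes parallelism knobs such as the intra-op and
# inter-op thread pools; here we only vary the intra-op pool size.
import time
import torch

torch.set_num_interop_threads(2)  # inter-op pool: must be set before first use
a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

default_threads = torch.get_num_threads()
for threads in (1, default_threads):
    torch.set_num_threads(threads)          # intra-op pool size
    start = time.perf_counter()
    for _ in range(10):
        torch.mm(a, b)
    elapsed = time.perf_counter() - start
    print(f"intra-op threads={threads}: {elapsed:.3f}s")
```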
