Search Results for author: Darren Key

Found 3 papers, 1 paper with code

HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis

1 code implementation • 29 Apr 2024 • Andy He, Darren Key, Mason Bulling, Andrew Chang, Skyler Shapiro, Everett Lee

Graphics Processing Units (GPUs) have become the leading hardware accelerator for deep learning and are widely used for both training and inference of transformers. Transformers achieve state-of-the-art performance in many areas of machine learning and underpin most modern Large Language Models (LLMs).

Edge-computing

WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment

no code implementations • 19 Feb 2024 • Hao Tang, Darren Key, Kevin Ellis

We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment.
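The abstract describes an agent whose world model is an executable Python program, revised through interaction with the environment. A minimal sketch of that loop might look like the following; the toy environment, the stand-in "revision" step, and all function names here are illustrative assumptions, not the paper's actual implementation (which uses an LLM to rewrite the program):

```python
def true_environment(state, action):
    # Toy environment the agent interacts with; its dynamics are
    # unknown to the agent. Purely illustrative.
    return state + action if action >= 0 else state * 2

def run_agent(transitions):
    # The agent stores its world model as Python source code.
    model_src = "def world_model(state, action):\n    return state + action\n"
    namespace = {}
    exec(model_src, namespace)
    model = namespace["world_model"]

    mismatches = []
    for state, action in transitions:
        predicted = model(state, action)
        observed = true_environment(state, action)
        if predicted != observed:
            # In a WorldCoder-style agent, a prediction error would
            # trigger an LLM call that rewrites the program. Here a
            # hand-written "revised" program stands in for that step.
            mismatches.append((state, action, observed))
            model_src = (
                "def world_model(state, action):\n"
                "    return state + action if action >= 0 else state * 2\n"
            )
            exec(model_src, namespace)
            model = namespace["world_model"]
    return model, mismatches

model, mismatches = run_agent([(1, 2), (3, -1), (3, -1)])
```

After one mismatch the revised program predicts the negative-action case correctly, so later transitions produce no further errors. The key design point the abstract suggests is that the world model is code, so it can be inspected, executed, and incrementally edited rather than retrained.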

Program Synthesis

Toward Trustworthy Neural Program Synthesis

no code implementations • 29 Sep 2022 • Darren Key, Wen-Ding Li, Kevin Ellis

We develop an approach to estimate the probability that a program sampled from a large language model is correct.
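One generic way to estimate such a probability is to sample several candidate programs and use behavioral agreement on shared inputs as a proxy for correctness. The sketch below is an illustrative consistency heuristic under that assumption, not the paper's actual estimator; the function and variable names are hypothetical:

```python
from collections import Counter

def correctness_confidence(programs, test_inputs):
    # Run every sampled program on the same inputs and record its
    # behavior as a tuple of outputs.
    behaviors = [tuple(prog(x) for x in test_inputs) for prog in programs]
    counts = Counter(behaviors)
    total = len(programs)
    # Score each program by the fraction of samples that behave
    # identically to it: programs agreeing with the majority get
    # higher estimated probability of being correct.
    return [counts[b] / total for b in behaviors]

# Three "sampled" candidates for a doubling function; the last is buggy.
candidates = [lambda x: x * 2, lambda x: x + x, lambda x: x ** 2]
scores = correctness_confidence(candidates, [1, 2, 3])
```

Here the two equivalent implementations each score 2/3 while the buggy one scores 1/3, so agreement-based confidence separates the majority behavior from the outlier.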

Language Modelling • Large Language Model • +1
