Search Results for author: Scott Barnett

Found 8 papers, 0 papers with code

LLMs for Test Input Generation for Semantic Caches

no code implementations16 Jan 2024 Zafaryab Rasool, Scott Barnett, David Willie, Stefanus Kurniawan, Sherwin Balugo, Srikanth Thudumu, Mohamed Abdelrazek

Our novel approach uses the reasoning capabilities of LLMs to 1) adapt queries to the domain, 2) synthesise subtle variations to queries, and 3) evaluate the synthesised test dataset.
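The three steps in this abstract can be sketched as a small pipeline. This is a hypothetical illustration, not the paper's implementation: `stub_llm`, the prompt strings, and all function names are assumptions standing in for a real chat-completion API.

```python
# Hypothetical sketch of the three-step test-generation loop described above.
# `llm` is a stand-in for any chat-completion callable; here it is a stub.

def adapt_to_domain(llm, query, domain):
    """Step 1: rewrite a seed query in the target domain's vocabulary."""
    return llm(f"Rewrite for the {domain} domain: {query}")

def synthesise_variations(llm, query, n=3):
    """Step 2: produce subtle paraphrases that should hit the same cache entry."""
    return [llm(f"Paraphrase {i}: {query}") for i in range(n)]

def evaluate_dataset(llm, pairs):
    """Step 3: ask the model to judge whether each pair is semantically equivalent."""
    return [llm(f"Same meaning? {a} || {b}") == "yes" for a, b in pairs]

# Trivial stub LLM so the sketch runs without an API key.
def stub_llm(prompt):
    return "yes" if prompt.startswith("Same meaning?") else prompt.split(": ", 1)[1]

seed = "How do I reset my password?"
adapted = adapt_to_domain(stub_llm, seed, "banking")
variants = synthesise_variations(stub_llm, adapted)
labels = evaluate_dataset(stub_llm, [(adapted, v) for v in variants])
print(len(variants), all(labels))  # → 3 True
```

With a real model behind `llm`, the evaluated pairs form the labelled test dataset for probing a semantic cache's hit/miss behaviour.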

Text Generation

ML-On-Rails: Safeguarding Machine Learning Models in Software Systems -- A Case Study

no code implementations12 Jan 2024 Hala Abdelkader, Mohamed Abdelrazek, Scott Barnett, Jean-Guy Schneider, Priya Rani, Rajesh Vasa

In this paper, we introduce ML-On-Rails, a protocol designed to safeguard ML models, establish a well-defined endpoint interface for different ML tasks, and enable clear communication between ML providers and ML consumers (software engineers).
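One way to picture such an endpoint contract is a typed request/response pair that carries explicit safeguard status alongside the prediction. This is a minimal sketch under that assumption; the class names, fields, and the stub classifier are all hypothetical, not the ML-On-Rails protocol itself.

```python
# Hypothetical sketch of a "rails"-style endpoint contract: a typed request,
# a typed response, and safeguard results made explicit to the consumer.
from dataclasses import dataclass, field

@dataclass
class MLRequest:
    task: str      # e.g. "text-classification"
    payload: dict

@dataclass
class MLResponse:
    prediction: object
    safeguards: dict = field(default_factory=dict)  # e.g. {"input_check": "pass"}

def serve(request: MLRequest) -> MLResponse:
    # Input safeguard: reject payloads the model cannot handle, instead of
    # failing silently inside the model.
    if "text" not in request.payload:
        return MLResponse(prediction=None, safeguards={"input_check": "fail"})
    # Stub model call; a real provider would dispatch on request.task.
    label = "positive" if "good" in request.payload["text"] else "negative"
    return MLResponse(prediction=label, safeguards={"input_check": "pass"})

ok = serve(MLRequest(task="text-classification", payload={"text": "a good day"}))
bad = serve(MLRequest(task="text-classification", payload={}))
print(ok.prediction, bad.safeguards["input_check"])  # → positive fail
```

Surfacing safeguard outcomes in the response, rather than raising opaque errors, is what gives the consuming software engineer a contract to program against.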

Evaluating LLMs on Document-Based QA: Exact Answer Selection and Numerical Extraction using Cogtale dataset

no code implementations14 Nov 2023 Zafaryab Rasool, Stefanus Kurniawan, Sherwin Balugo, Scott Barnett, Rajesh Vasa, Courtney Chesser, Benjamin M. Hampstead, Sylvie Belleville, Kon Mouzakis, Alex Bahar-Fuchs

In this paper, we specifically focus on this underexplored context and conduct an empirical analysis of LLMs (GPT-4 and GPT-3.5) on question types, including single-choice, yes-no, multiple-choice, and number extraction questions from documents in a zero-shot setting.

Answer Selection Information Retrieval +2

Green Runner: A tool for efficient model selection from model repositories

no code implementations26 May 2023 Jai Kannan, Scott Barnett, Anj Simmons, Taylan Selvi, Luis Cruz

Deep learning models have become essential in software engineering, enabling intelligent features like image captioning and document generation.

Image Captioning Language Modelling +2

Comparative analysis of real bugs in open-source Machine Learning projects -- A Registered Report

no code implementations20 Sep 2022 Tuan Dung Lai, Anj Simmons, Scott Barnett, Jean-Guy Schneider, Rajesh Vasa

Objective: Our objective is to investigate whether there is a discrepancy in the distribution of resolution time between ML and non-ML issues and whether certain categories of ML issues require a longer time to resolve based on real issue reports in open-source applied ML projects.

MLSmellHound: A Context-Aware Code Analysis Tool

no code implementations8 May 2022 Jai Kannan, Scott Barnett, Luís Cruz, Anj Simmons, Akash Agarwal

In our approach, we attempt to resolve this problem by exploring the use of context, which includes i) the purpose of the source code, ii) the technical domain, iii) the problem domain, iv) team norms, v) the operational environment, and vi) the development lifecycle stage, to provide contextualised error reporting for code analysis.
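The six context dimensions above could drive lint-rule selection along these lines. This is a hypothetical sketch, not MLSmellHound's design: the `Context` fields mirror the abstract, but the rule sets and selection logic are invented for illustration.

```python
# Hypothetical sketch: choosing which lint rules apply based on the six
# context dimensions listed in the abstract.
from dataclasses import dataclass

@dataclass
class Context:
    purpose: str         # i)   e.g. "experiment" vs "production"
    tech_domain: str     # ii)  e.g. "deep-learning"
    problem_domain: str  # iii) e.g. "vision"
    team_norms: str      # iv)  e.g. "pep8"
    environment: str     # v)   e.g. "cloud"
    lifecycle: str       # vi)  e.g. "prototype" vs "release"

BASE_RULES = {"unused-import", "undefined-name"}
STRICT_RULES = {"missing-docstring", "too-many-locals"}

def rules_for(ctx: Context) -> set:
    rules = set(BASE_RULES)
    # Experimental notebooks tolerate looser style; release-bound
    # production code does not.
    if ctx.purpose == "production" and ctx.lifecycle == "release":
        rules |= STRICT_RULES
    return rules

proto = Context("experiment", "deep-learning", "vision", "pep8", "cloud", "prototype")
prod = Context("production", "deep-learning", "vision", "pep8", "cloud", "release")
print(len(rules_for(proto)), len(rules_for(prod)))  # → 2 4
```

The same findings are then reported differently depending on context, rather than applying one uniform rule set to every ML codebase.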

BIG-bench Machine Learning

Beware the evolving 'intelligent' web service! An integration architecture tactic to guard AI-first components

no code implementations27 May 2020 Alex Cummaudo, Scott Barnett, Rajesh Vasa, John Grundy, Mohamed Abdelrazek

Intelligent services provide the power of AI to developers via simple RESTful API endpoints, abstracting away many complexities of machine learning.
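One guarding tactic implied by the title is to validate each service response against the contract observed at integration time and flag silent evolution. This sketch is an assumption about what such a guard could look like, not the paper's proposed architecture; the response shape and `EXPECTED_LABELS` are invented.

```python
# Hypothetical guard for an evolving intelligent web service: compare each
# response's labels against the set observed when the component was built.
EXPECTED_LABELS = {"cat", "dog", "bird"}  # snapshot taken at integration time

def guard(response: dict) -> dict:
    labels = {p["label"] for p in response.get("predictions", [])}
    drifted = not labels <= EXPECTED_LABELS  # any unseen label signals evolution
    return {"response": response, "evolved": drifted}

stable = guard({"predictions": [{"label": "cat", "score": 0.9}]})
evolved = guard({"predictions": [{"label": "feline", "score": 0.9}]})  # new taxonomy
print(stable["evolved"], evolved["evolved"])  # → False True
```

Placing this check at the integration boundary lets the consuming system fail loudly when the provider retrains or relabels its model, instead of silently degrading.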

Interpreting Cloud Computer Vision Pain-Points: A Mining Study of Stack Overflow

no code implementations28 Jan 2020 Alex Cummaudo, Rajesh Vasa, Scott Barnett, John Grundy, Mohamed Abdelrazek

The objective of this study is to determine the various pain-points developers face when implementing systems that rely on the most mature of these intelligent services, specifically those that provide computer vision.
