no code implementations • 29 Aug 2024 • Anj Simmons, Scott Barnett, Anupam Chaudhuri, Sankhya Singh, Shangeetha Sivasothy
Interpretable models are important, but what happens when the model is updated on new training data?
no code implementations • 17 Jun 2024 • Scott Barnett, Zac Brannelly, Stefanus Kurniawan, Sheng Wong
This study extends this concept to the integration of LLMs within Retrieval-Augmented Generation (RAG) pipelines, which aim to improve accuracy and relevance by leveraging external corpus data for information retrieval.
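The retrieval-then-generate step described above can be sketched as follows. This is a minimal illustration under assumed details (the token-overlap scorer and prompt format are stand-ins; a real RAG pipeline would use embedding-based retrieval and an actual LLM call):

```python
def tokens(text: str) -> set[str]:
    """Lowercase tokens with trailing punctuation stripped."""
    return {t.strip(".,?!").lower() for t in text.split()}

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k passages ranked by crude token-overlap score."""
    return sorted(corpus, key=lambda p: len(tokens(query) & tokens(p)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG pipelines retrieve external documents before generation.",
    "An unrelated passage about relational databases.",
    "Retrieval improves answer accuracy and relevance.",
]
prompt = build_prompt("How does retrieval improve generation accuracy?", corpus)
```

The key design point is that the external corpus is consulted at query time, so the generator's answer can be grounded in documents the model never saw during training.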
no code implementations • 16 Jan 2024 • Zafaryab Rasool, Scott Barnett, David Willie, Stefanus Kurniawan, Sherwin Balugo, Srikanth Thudumu, Mohamed Abdelrazek
Our novel approach uses the reasoning capabilities of LLMs to 1) adapt queries to the domain, 2) synthesise subtle variations to queries, and 3) evaluate the synthesised test dataset.
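The three steps enumerated above can be sketched in miniature. All function names are illustrative assumptions, and simple string templates stand in for the LLM reasoning calls the paper relies on:

```python
def adapt_to_domain(query: str, domain: str) -> str:
    """Step 1: ground a generic query in the target domain (LLM stand-in)."""
    return f"In the context of {domain}: {query}"

def synthesise_variations(query: str) -> list[str]:
    """Step 2: produce subtle surface variations of the same question."""
    return [
        query,
        f"Could you explain {query.rstrip('?').lower()}?",
        f"{query} Please be specific.",
    ]

def evaluate_test_set(variations: list[str]) -> dict:
    """Step 3: basic quality checks on the synthesised test dataset."""
    return {"size": len(variations), "unique": len(set(variations)) == len(variations)}

q = adapt_to_domain("What does clause 4 require?", "insurance contracts")
variations = synthesise_variations(q)
report = evaluate_test_set(variations)
```

In the actual approach each step would be an LLM prompt; the point of the sketch is the pipeline shape: adapt, then vary, then evaluate the resulting test set before use.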
no code implementations • 12 Jan 2024 • Hala Abdelkader, Mohamed Abdelrazek, Scott Barnett, Jean-Guy Schneider, Priya Rani, Rajesh Vasa
In this paper, we introduce ML-On-Rails, a protocol designed to safeguard ML models, establish a well-defined endpoint interface for different ML tasks, and ensure clear communication between ML providers and ML consumers (software engineers).
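A well-defined endpoint interface with input "rails" could look like the following hedged sketch. The request/response fields and validation checks are assumptions for illustration, not the actual ML-On-Rails protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MLRequest:
    task: str      # which ML task the consumer is invoking, e.g. "sentiment"
    payload: str   # raw input handed to the model

@dataclass
class MLResponse:
    ok: bool
    result: Optional[str]
    error: Optional[str] = None

SUPPORTED_TASKS = {"sentiment"}

def guarded_endpoint(req: MLRequest) -> MLResponse:
    """Validate inputs (the 'rails') before the model is ever invoked."""
    if req.task not in SUPPORTED_TASKS:
        return MLResponse(ok=False, result=None, error=f"unsupported task: {req.task}")
    if not req.payload.strip():
        return MLResponse(ok=False, result=None, error="empty payload")
    # Stand-in for the real model call.
    result = "positive" if "good" in req.payload.lower() else "negative"
    return MLResponse(ok=True, result=result)
```

The explicit contract means a software engineer consuming the model gets a structured error rather than an undefined failure when the request falls outside what the provider supports.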
no code implementations • 14 Nov 2023 • Zafaryab Rasool, Stefanus Kurniawan, Sherwin Balugo, Scott Barnett, Rajesh Vasa, Courtney Chesser, Benjamin M. Hampstead, Sylvie Belleville, Kon Mouzakis, Alex Bahar-Fuchs
In this paper, we specifically focus on this underexplored context and conduct an empirical analysis of LLMs (GPT-4 and GPT-3.5) on question types, including single-choice, yes-no, multiple-choice, and number extraction questions from documents, in a zero-shot setting.
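Zero-shot evaluation across these question types amounts to formatting a type-specific instruction around the document and question, with no worked examples in the prompt. The templates below are illustrative assumptions, not the paper's exact wording:

```python
# One zero-shot template per question type; {options} is only used by
# choice-style questions (str.format ignores unused keyword arguments).
TEMPLATES = {
    "yes-no": "Answer only 'yes' or 'no'.\nQuestion: {q}\nDocument: {doc}",
    "single-choice": "Choose exactly one of: {options}.\nQuestion: {q}\nDocument: {doc}",
    "number": "Answer with a single number.\nQuestion: {q}\nDocument: {doc}",
}

def build_zero_shot_prompt(qtype: str, question: str, document: str, options=()) -> str:
    """Render a zero-shot prompt for the given question type."""
    return TEMPLATES[qtype].format(q=question, doc=document, options=", ".join(options))

p = build_zero_shot_prompt("yes-no", "Is the participant over 65?", "Age: 72")
```

Constraining the answer format per question type ("only 'yes' or 'no'", "a single number") is what makes the extracted answers directly comparable against gold labels.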
no code implementations • 26 May 2023 • Jai Kannan, Scott Barnett, Anj Simmons, Taylan Selvi, Luis Cruz
Deep learning models have become essential in software engineering, enabling intelligent features like image captioning and document generation.
no code implementations • 20 Sep 2022 • Tuan Dung Lai, Anj Simmons, Scott Barnett, Jean-Guy Schneider, Rajesh Vasa
Objective: Our objective is to investigate whether there is a discrepancy in the distribution of resolution time between ML and non-ML issues and whether certain categories of ML issues require a longer time to resolve based on real issue reports in open-source applied ML projects.
no code implementations • 8 May 2022 • Jai Kannan, Scott Barnett, Luís Cruz, Anj Simmons, Akash Agarwal
In our approach, we attempt to resolve this problem by exploring the use of context, including i) the purpose of the source code, ii) the technical domain, iii) the problem domain, iv) team norms, v) the operational environment, and vi) the development lifecycle stage, to provide contextualised error reporting for code analysis.
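One way to picture the six context dimensions travelling with an analysis finding is the minimal structure below. The field names and report format are hypothetical, not the paper's schema:

```python
from dataclasses import dataclass

@dataclass
class AnalysisContext:
    purpose: str           # i) purpose of the source code
    technical_domain: str  # ii) technical domain
    problem_domain: str    # iii) problem domain
    team_norms: str        # iv) team norms
    environment: str       # v) operational environment
    lifecycle_stage: str   # vi) development lifecycle stage

def contextualise(message: str, ctx: AnalysisContext) -> str:
    """Render an error report that carries its context with it."""
    return f"[{ctx.lifecycle_stage}/{ctx.technical_domain}] {message} (purpose: {ctx.purpose})"

ctx = AnalysisContext("data cleaning", "ML pipeline", "healthcare",
                      "PEP 8", "cloud", "prototyping")
report = contextualise("Unvalidated model input", ctx)
```

The same warning can then be surfaced or suppressed differently depending on, say, whether the code is a prototype or in production.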
no code implementations • 27 May 2020 • Alex Cummaudo, Scott Barnett, Rajesh Vasa, John Grundy, Mohamed Abdelrazek
Intelligent services provide the power of AI to developers via simple RESTful API endpoints, abstracting away many complexities of machine learning.
no code implementations • 28 Jan 2020 • Alex Cummaudo, Rajesh Vasa, Scott Barnett, John Grundy, Mohamed Abdelrazek
The objective of this study is to determine the various pain-points developers face when implementing systems that rely on the most mature of these intelligent services, specifically those that provide computer vision.