Search Results for author: Daniel Rodriguez-Cardenas

Found 2 papers, 0 papers with code

Benchmarking Causal Study to Interpret Large Language Models for Source Code

no code implementations • 23 Aug 2023 • Daniel Rodriguez-Cardenas, David N. Palacio, Dipin Khati, Henry Burke, Denys Poshyvanyk

We illustrate the insights of our benchmarking strategy by conducting a case study on the performance of ChatGPT under distinct prompt engineering methods.

Tasks: Benchmarking, Causal Inference, +4 more

Evaluating and Explaining Large Language Models for Code Using Syntactic Structures

no code implementations • 7 Aug 2023 • David N. Palacio, Alejandro Velasco, Daniel Rodriguez-Cardenas, Kevin Moran, Denys Poshyvanyk

To this end, this paper introduces ASTxplainer, an explainability method specific to LLMs for code that enables both new methods for LLM evaluation and visualizations of LLM predictions that aid end-users in understanding model predictions.
