Search Results for author: Paul Denny

Found 26 papers, 5 papers with code

Large Language Models Meet User Interfaces: The Case of Provisioning Feedback

no code implementations • 17 Apr 2024 • Stanislav Pozdniakov, Jonathan Brazil, Solmaz Abdi, Aneesha Bakharia, Shazia Sadiq, Dragan Gasevic, Paul Denny, Hassan Khosravi

Incorporating Generative AI (GenAI) and Large Language Models (LLMs) in education can enhance teaching efficiency and enrich student learning.

"Like a Nesting Doll": Analyzing Recursion Analogies Generated by CS Students using Large Language Models

no code implementations • 14 Mar 2024 • Seth Bernstein, Paul Denny, Juho Leinonen, Lauren Kan, Arto Hellas, Matt Littlefield, Sami Sarsa, Stephen MacNeil

Grasping complex computing concepts often poses a challenge for students who struggle to anchor these new ideas to familiar experiences and understandings.

Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

no code implementations • 2 Feb 2024 • Paul Denny, Sumit Gulwani, Neil T. Heffernan, Tanja Käser, Steven Moore, Anna N. Rafferty, Adish Singla

This survey article has grown out of the GAIED (pronounced "guide") workshop organized by the authors at the NeurIPS 2023 conference.

A Systematic Review of Aspect-based Sentiment Analysis (ABSA): Domains, Methods, and Trends

no code implementations • 16 Nov 2023 • Yan Cathy Hua, Paul Denny, Katerina Taskova, Jörg Wicker

This review is one of the largest systematic literature reviews (SLRs) on ABSA and, to our knowledge, the first to systematically examine the trends and inter-relations among ABSA research and data distribution across domains, solution paradigms, and approaches.

Aspect-Based Sentiment Analysis (ABSA)

AI-TA: Towards an Intelligent Question-Answer Teaching Assistant using Open-Source LLMs

no code implementations • 5 Nov 2023 • Yann Hicke, Anmol Agarwal, Qianou Ma, Paul Denny

Responding to the thousands of student questions on online QA platforms each semester has a considerable human cost, particularly in computing courses with rapidly growing enrollments.

Question Answering, Retrieval

Efficient Classification of Student Help Requests in Programming Courses Using Large Language Models

no code implementations • 31 Oct 2023 • Jaromir Savelka, Paul Denny, Mark Liffiton, Brad Sheese

This study evaluates the performance of the GPT-3.5 and GPT-4 models for classifying help requests from students in an introductory programming class.
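
As a rough sketch of this kind of LLM-based classification (not the authors' actual pipeline; the category labels, prompt wording, and use of the OpenAI Python client are assumptions for illustration only):

```python
# Hedged sketch: zero-shot classification of a student help request with an LLM.
# The categories and prompt below are illustrative assumptions, not the taxonomy
# or prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["debugging", "conceptual question", "logistics", "other"]  # assumed labels

def classify_help_request(request_text: str) -> str:
    prompt = (
        "Classify the following student help request into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        f"Request: {request_text}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(classify_help_request("My for loop never stops and I don't know why."))
```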

The Robots are Here: Navigating the Generative AI Revolution in Computing Education

no code implementations • 1 Oct 2023 • James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Peterson, Raymond Pettit, Brent N. Reeves, Jaromir Savelka

Second, we report the findings of a survey of computing students and instructors from across 20 countries, capturing prevailing attitudes towards LLMs and their use in computing education contexts.

Ethics

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu

When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.

Explanation Generation, Language Modelling, +2

Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators

no code implementations • 31 Jul 2023 • Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves

In parallel with this shift, a new essential skill is emerging: the ability to construct good prompts for code-generating models.

Can We Trust AI-Generated Educational Content? Comparative Analysis of Human and AI-Generated Learning Resources

no code implementations • 18 Jun 2023 • Paul Denny, Hassan Khosravi, Arto Hellas, Juho Leinonen, Sami Sarsa

In this study, we investigated the potential for LLMs to produce learning resources in an introductory programming context, by comparing the quality of the resources generated by an LLM with those created by students as part of a learnersourcing activity.

Learnersourcing in the Age of AI: Student, Educator and Machine Partnerships for Content Creation

no code implementations • 10 Jun 2023 • Hassan Khosravi, Paul Denny, Steven Moore, John Stamper

Engaging students in creating novel content, also referred to as learnersourcing, is increasingly recognised as an effective approach to promoting higher-order learning, deeply engaging students with course material and developing large repositories of content suitable for personalized learning.

Computing Education in the Era of Generative AI

no code implementations • 5 Jun 2023 • Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, Sami Sarsa

The computing education community has a rich history of pedagogical innovation designed to support students in introductory courses, and to support teachers in facilitating student learning.

Code Generation

Comparing Code Explanations Created by Students and Large Language Models

no code implementations • 8 Apr 2023 • Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, Arto Hellas

In this paper, we explore the potential of LLMs in generating explanations that can serve as examples to scaffold students' ability to understand and explain code.

"It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers

no code implementations • 5 Apr 2023 • James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, Eddie Antonio Santos

Recent developments in deep learning have resulted in code-generation models that produce source code from natural language and code-based prompts with high accuracy.

Code Generation

Many bioinformatics programming tasks can be automated with ChatGPT

1 code implementation • 7 Mar 2023 • Stephen R. Piccolo, Paul Denny, Andrew Luxton-Reilly, Samuel Payne, Perry G. Ridge

However, despite a variety of educational efforts, learning to write code can be a challenging endeavor for both researchers and students in life science disciplines.

Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language

no code implementations • 27 Oct 2022 • Paul Denny, Viraj Kumar, Nasser Giacaman

GitHub Copilot is an artificial intelligence model for automatically generating source code from natural language problem descriptions.

Prompt Engineering

Using Large Language Models to Enhance Programming Error Messages

no code implementations • 20 Oct 2022 • Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, Brett A. Becker

Large language models can be used to create useful and novice-friendly enhancements to programming error messages that sometimes surpass the original programming error messages in interpretability and actionability.
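
A minimal sketch of the underlying idea (not the authors' prompts, models, or evaluation setup; the prompt wording and the OpenAI Python client are assumptions for illustration):

```python
# Hedged sketch: asking an LLM to rewrite a raw compiler error in novice-friendly terms.
# Prompt wording and model choice are assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()

def enhance_error_message(source_code: str, compiler_error: str) -> str:
    prompt = (
        "A beginner programmer got the compiler error below. Explain in plain "
        "language what it means and suggest one likely fix.\n\n"
        f"Code:\n{source_code}\n\nError:\n{compiler_error}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(enhance_error_message('int main() { printf("hi") }',
                            "error: expected ';' before '}' token"))
```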

Automatic Generation of Programming Exercises and Code Explanations using Large Language Models

no code implementations • 3 Jun 2022 • Sami Sarsa, Paul Denny, Arto Hellas, Juho Leinonen

Our analysis suggests that there is significant value in massive generative machine learning models as a tool for instructors, although there remains a need for some oversight to ensure the quality of the generated content before it is delivered to students.

Language Modelling, Large Language Model, +1

DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions

no code implementations • 19 Nov 2021 • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu

We propose DeepQR, a novel neural-network model for automated question quality rating (AQQR) that is trained using multiple-choice-question (MCQ) datasets collected from PeerWise, a widely-used learnersourcing platform.

Contrastive Learning, Multiple-choice

An expanded evaluation of protein function prediction methods shows an improvement in accuracy

1 code implementation • 3 Jan 2016 • Yuxiang Jiang, Tal Ronnen Oron, Wyatt T Clark, Asma R Bankapur, Daniel D'Andrea, Rosalba Lepore, Christopher S Funk, Indika Kahanda, Karin M Verspoor, Asa Ben-Hur, Emily Koo, Duncan Penfold-Brown, Dennis Shasha, Noah Youngs, Richard Bonneau, Alexandra Lin, Sayed ME Sahraeian, Pier Luigi Martelli, Giuseppe Profiti, Rita Casadio, Renzhi Cao, Zhaolong Zhong, Jianlin Cheng, Adrian Altenhoff, Nives Skunca, Christophe Dessimoz, Tunca Dogan, Kai Hakala, Suwisa Kaewphan, Farrokh Mehryary, Tapio Salakoski, Filip Ginter, Hai Fang, Ben Smithers, Matt Oates, Julian Gough, Petri Törönen, Patrik Koskinen, Liisa Holm, Ching-Tai Chen, Wen-Lian Hsu, Kevin Bryson, Domenico Cozzetto, Federico Minneci, David T Jones, Samuel Chapman, Dukka B K. C., Ishita K Khan, Daisuke Kihara, Dan Ofer, Nadav Rappoport, Amos Stern, Elena Cibrian-Uhalte, Paul Denny, Rebecca E Foulger, Reija Hieta, Duncan Legge, Ruth C Lovering, Michele Magrane, Anna N Melidoni, Prudence Mutowo-Meullenet, Klemens Pichler, Aleksandra Shypitsyna, Biao Li, Pooya Zakeri, Sarah ElShal, Léon-Charles Tranchevent, Sayoni Das, Natalie L Dawson, David Lee, Jonathan G Lees, Ian Sillitoe, Prajwal Bhat, Tamás Nepusz, Alfonso E Romero, Rajkumar Sasidharan, Haixuan Yang, Alberto Paccanaro, Jesse Gillis, Adriana E Sedeño-Cortés, Paul Pavlidis, Shou Feng, Juan M Cejuela, Tatyana Goldberg, Tobias Hamp, Lothar Richter, Asaf Salamov, Toni Gabaldon, Marina Marcet-Houben, Fran Supek, Qingtian Gong, Wei Ning, Yuanpeng Zhou, Weidong Tian, Marco Falda, Paolo Fontana, Enrico Lavezzo, Stefano Toppo, Carlo Ferrari, Manuel Giollo, Damiano Piovesan, Silvio Tosatto, Angela del Pozo, José M Fernández, Paolo Maietta, Alfonso Valencia, Michael L Tress, Alfredo Benso, Stefano Di Carlo, Gianfranco Politano, Alessandro Savino, Hafeez Ur Rehman, Matteo Re, Marco Mesiti, Giorgio Valentini, Joachim W Bargsten, Aalt DJ van Dijk, Branislava Gemovic, Sanja Glisic, Vladmir Perovic, Veljko Veljkovic, Nevena Veljkovic, Danillo C Almeida-e-Silva, Ricardo ZN Vencio, Malvika Sharan, Jörg Vogel, Lakesh Kansakar, Shanshan Zhang, Slobodan Vucetic, Zheng Wang, Michael JE Sternberg, Mark N Wass, Rachael P Huntley, Maria J Martin, Claire O'Donovan, Peter N. Robinson, Yves Moreau, Anna Tramontano, Patricia C Babbitt, Steven E Brenner, Michal Linial, Christine A Orengo, Burkhard Rost, Casey S Greene, Sean D Mooney, Iddo Friedberg, Predrag Radivojac

To review progress in the field, the analysis also compared the best methods participating in CAFA1 to those of CAFA2.

Quantitative Methods
