no code implementations • EMNLP (WNUT) 2020 • Luca Molteni, Mittul Singh, Juho Leinonen, Katri Leino, Mikko Kurimo, Emanuele Della Valle
In this article, we compare two crowdsourcing sources on a dialogue paraphrasing task revolving around a chatbot service.
no code implementations • EURALI (LREC) 2022 • Juho Leinonen, Niko Partanen, Sami Virpioja, Mikko Kurimo
Cross-language forced alignment is a solution for linguists who create speech corpora for very low-resource languages.
1 code implementation • NoDaLiDa 2021 • Juho Leinonen, Sami Virpioja, Mikko Kurimo
Forced alignment is an effective process to speed up linguistic research.
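Neither forced-alignment entry above shows the mechanism itself, so here is a minimal self-contained sketch of the core idea: given frame-level phone log-probabilities from an acoustic model and the known phone sequence of a transcript, a dynamic program picks the monotonic frame-to-phone assignment with the highest total score. The toy posteriors, phone inventory, and function below are illustrative assumptions, not the toolchain used in the papers.

```python
import numpy as np

def forced_align(log_probs, phone_seq):
    """Align T frames to a known phone sequence with a simple dynamic program.

    log_probs : (T, P) array of frame-level log-probabilities over P phones.
    phone_seq : the transcript as a list of phone indices, length N <= T.
    Returns a list of (start_frame, end_frame) spans, one per phone (end exclusive).
    """
    T = log_probs.shape[0]
    N = len(phone_seq)
    score = np.full((T, N), -np.inf)
    advanced = np.zeros((T, N), dtype=bool)   # True if we moved to a new phone at frame t
    score[0, 0] = log_probs[0, phone_seq[0]]
    for t in range(1, T):
        for n in range(min(t + 1, N)):
            stay = score[t - 1, n]
            move = score[t - 1, n - 1] if n > 0 else -np.inf
            advanced[t, n] = move > stay
            score[t, n] = max(stay, move) + log_probs[t, phone_seq[n]]
    # Backtrack from the last frame and last phone to recover each phone's start frame.
    starts = [0] * N
    n = N - 1
    for t in range(T - 1, 0, -1):
        if n == 0:
            break
        if advanced[t, n]:
            starts[n] = t
            n -= 1
    return [(starts[i], starts[i + 1] if i + 1 < N else T) for i in range(N)]

# Toy example: 10 frames, 3 phone classes, transcript [0, 2, 1].
rng = np.random.default_rng(0)
posteriors = np.log(rng.dirichlet(np.ones(3), size=10))
print(forced_align(posteriors, [0, 2, 1]))
```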
no code implementations • 5 Jul 2024 • Charles Koutcheme, Nicola Dainese, Arto Hellas, Sami Sarsa, Juho Leinonen, Syed Ashraf, Paul Denny
The emergence of large language models (LLMs) has transformed research and practice in a wide range of domains.
no code implementations • 28 May 2024 • James Prather, Brent Reeves, Juho Leinonen, Stephen MacNeil, Arisoa S. Randrianasolo, Brett Becker, Bailey Kimmel, Jared Wright, Ben Briggs
Here we replicate a previous study that examined novice programming problem-solving behavior and extend it by incorporating GenAI tools.

1 code implementation • 8 May 2024 • Charles Koutcheme, Nicola Dainese, Sami Sarsa, Arto Hellas, Juho Leinonen, Paul Denny
Inspired by recent work that has utilised very powerful LLMs, such as GPT-4, to evaluate the outputs produced by less powerful models, we conduct an automated analysis of the quality of the feedback produced by several open source models using a dataset from an introductory programming course.
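The entry above describes an LLM-as-judge setup: a strong model grades the feedback written by weaker open-source models. A minimal sketch of what such an automated analysis can look like follows; the rubric dimensions, prompt wording, and model name are assumptions for illustration, not the paper's actual protocol.

```python
# LLM-as-judge sketch: a strong model grades feedback written by a weaker model
# for a buggy student program. Rubric, prompt, and model are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_feedback(student_code: str, feedback: str, model: str = "gpt-4") -> str:
    prompt = (
        "You are grading automatically generated feedback for a student's "
        "buggy program.\n\n"
        f"Student code:\n{student_code}\n\n"
        f"Feedback to grade:\n{feedback}\n\n"
        "Rate the feedback from 1-5 on: (a) correctly identifies the bug, "
        "(b) is understandable to a novice, (c) does not reveal the full "
        "solution. Answer as JSON."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

buggy = "def mean(xs):\n    return sum(xs) / (len(xs) - 1)"
feedback = "Check the denominator you divide by: it should be the list length."
print(judge_feedback(buggy, feedback))
```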
1 code implementation • 8 May 2024 • Charles Koutcheme, Nicola Dainese, Sami Sarsa, Juho Leinonen, Arto Hellas, Paul Denny
The emergence of large language models (LLMs) has sparked enormous interest due to their potential application across a range of educational tasks.
no code implementations • 14 Mar 2024 • Seth Bernstein, Paul Denny, Juho Leinonen, Lauren Kan, Arto Hellas, Matt Littlefield, Sami Sarsa, Stephen MacNeil
Grasping complex computing concepts often poses a challenge for students who struggle to anchor these new ideas to familiar experiences and understandings.
no code implementations • 19 Jan 2024 • James Prather, Paul Denny, Juho Leinonen, David H. Smith IV, Brent N. Reeves, Stephen MacNeil, Brett A. Becker, Andrew Luxton-Reilly, Thezyrie Amarouche, Bailey Kimmel
In this paper, we propose a new way to teach programming with Prompt Problems.
no code implementations • 27 Nov 2023 • Stephen MacNeil, Paul Denny, Andrew Tran, Juho Leinonen, Seth Bernstein, Arto Hellas, Sami Sarsa, Joanne Kim
Unlike syntax errors, for which a compiler or interpreter can issue a message, logic errors can be subtle.
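As a concrete illustration of the distinction drawn above (the example is mine, not from the paper): the function below is syntactically valid and runs without any interpreter message, yet returns the wrong value.

```python
# Logic error: no compiler or interpreter message is issued; the result is just wrong.
def average(xs):
    return sum(xs) / len(xs) + 1   # the stray "+ 1" is the subtle logic error

print(average([2, 4, 6]))   # prints 5.0, but the correct average is 4.0
```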
no code implementations • 1 Oct 2023 • James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Peterson, Raymond Pettit, Brent N. Reeves, Jaromir Savelka
Second, we report the findings of a survey of computing students and instructors from across 20 countries, capturing prevailing attitudes towards LLMs and their use in computing education contexts.
1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu
When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.
no code implementations • 31 Jul 2023 • Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, Brent N. Reeves
In parallel with this shift, a new essential skill is emerging -- the ability to construct good prompts for code-generating models.
no code implementations • 18 Jun 2023 • Paul Denny, Hassan Khosravi, Arto Hellas, Juho Leinonen, Sami Sarsa
In this study, we investigated the potential for LLMs to produce learning resources in an introductory programming context, by comparing the quality of the resources generated by an LLM with those created by students as part of a learnersourcing activity.
no code implementations • 9 Jun 2023 • Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, Juha Sorva
At the same time, the results highlight the unreliability of LLMs: they make some of the same mistakes that students do, perhaps especially when formatting output as required by automated assessment systems.
no code implementations • 5 Jun 2023 • Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, Sami Sarsa
The computing education community has a rich history of pedagogical innovation designed to support students in introductory courses, and to support teachers in facilitating student learning.
no code implementations • 8 Apr 2023 • Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, Arto Hellas
In this paper, we explore the potential of LLMs in generating explanations that can serve as examples to scaffold students' ability to understand and explain code.
no code implementations • 5 Apr 2023 • James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, Eddie Antonio Santos
Recent developments in deep learning have resulted in code-generation models that produce source code from natural language and code-based prompts with high accuracy.
no code implementations • 20 Oct 2022 • Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, Brett A. Becker
Large language models can be used to create useful and novice-friendly enhancements to programming error messages that sometimes surpass the originals in interpretability and actionability.
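As a rough illustration of the pattern described above: capture the raw interpreter message and ask a model to restate it for a novice. The prompt wording, model name, and client usage below are assumptions for the sketch, not the exact setup evaluated in the paper.

```python
# Sketch of LLM-enhanced error messages: capture a raw traceback and ask a model
# to rephrase it for a novice. Prompt and model are illustrative assumptions.
import traceback
from openai import OpenAI

client = OpenAI()

def explain_error(source: str) -> str | None:
    try:
        exec(source, {})                    # run the student's program
        return None                         # no error, nothing to explain
    except Exception:
        raw_message = traceback.format_exc()
    prompt = (
        "Explain the following Python error to a beginner in plain language, "
        "and suggest one concrete fix. Do not rewrite the whole program.\n\n"
        f"Code:\n{source}\n\nError:\n{raw_message}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

print(explain_error("numbers = [1, 2, 3]\nprint(numbers[3])"))
```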
no code implementations • 3 Jun 2022 • Sami Sarsa, Paul Denny, Arto Hellas, Juho Leinonen
Our analysis suggests that there is significant value in massive generative machine learning models as a tool for instructors, although there remains a need for some oversight to ensure the quality of the generated content before it is delivered to students.
no code implementations • 30 Dec 2021 • Sami Sarsa, Juho Leinonen, Arto Hellas
To evaluate how different aspects of DLKT models influence model performance, we test input and output layer variations found in the compared models that are independent of the main architectures.
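The snippet above mentions input- and output-layer variations that are independent of the main DLKT architecture. The sketch below shows one such variation in isolation: a DKT-style LSTM whose (skill, correctness) input is encoded either as a one-hot vector or as a learned embedding, with the recurrent core held fixed. Sizes, the toy batch, and this specific variation are illustrative assumptions, not the paper's experimental grid.

```python
# Minimal DKT-style model where only the input encoding varies ("onehot" vs a
# learned embedding). Dimensions and the toy batch are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DKT(nn.Module):
    def __init__(self, n_skills: int, hidden: int = 64, input_mode: str = "onehot"):
        super().__init__()
        self.n_skills = n_skills
        self.input_mode = input_mode
        in_dim = 2 * n_skills if input_mode == "onehot" else 64
        if input_mode == "embedding":
            self.embed = nn.Embedding(2 * n_skills, in_dim)
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_skills)

    def forward(self, skills: torch.Tensor, correct: torch.Tensor) -> torch.Tensor:
        # Encode each (skill, correctness) interaction as a single index.
        idx = skills + self.n_skills * correct
        if self.input_mode == "onehot":
            x = F.one_hot(idx, 2 * self.n_skills).float()
        else:
            x = self.embed(idx)
        h, _ = self.rnn(x)
        # Predicted probability of answering each skill correctly at each step.
        return torch.sigmoid(self.out(h))

# Toy batch: 2 learners, 5 interactions each, 10 skills.
skills = torch.randint(0, 10, (2, 5))
correct = torch.randint(0, 2, (2, 5))
for mode in ("onehot", "embedding"):
    print(mode, DKT(10, input_mode=mode)(skills, correct).shape)  # (2, 5, 10)
```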
1 code implementation • 19 Aug 2020 • Katri Leino, Juho Leinonen, Mittul Singh, Sami Virpioja, Mikko Kurimo
Using this corpus, we also construct a retrieval-based evaluation task for Finnish chatbot development.
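The entry names a retrieval-based evaluation task without showing what such a task involves. Below is a generic sketch of the setup: given a conversational prompt, rank candidate responses by similarity and score the ranking with recall@k. The TF-IDF retriever, the tiny English stand-in data, and the metric choice are illustrative assumptions, not the corpus or protocol of the paper.

```python
# Generic retrieval-based chatbot evaluation: rank candidate responses for each
# prompt and measure how often the gold response appears in the top k.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompts = ["how do I reset my password", "what are your opening hours"]
true_responses = ["use the forgot password link", "we are open nine to five"]
distractors = ["the weather is nice today", "your order has been shipped"]

candidates = true_responses + distractors           # shared candidate pool
vectorizer = TfidfVectorizer().fit(prompts + candidates)

def recall_at_k(k: int = 1) -> float:
    hits = 0
    cand_vecs = vectorizer.transform(candidates)
    for i, prompt in enumerate(prompts):
        sims = cosine_similarity(vectorizer.transform([prompt]), cand_vecs)[0]
        top_k = np.argsort(sims)[::-1][:k]
        hits += int(i in top_k)   # candidate i is the gold response for prompt i
    return hits / len(prompts)

print("recall@1 =", recall_at_k(1))
```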