Search Results for author: Qiming Bao

Found 10 papers, 7 papers with code

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu

When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.

Tasks: Explanation Generation, Language Modelling, +2

Large Language Models Are Not Strong Abstract Reasoners

1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie

We perform extensive evaluations of state-of-the-art LLMs, showing that they currently achieve very limited performance on abstract reasoning compared with other natural language tasks, even when applying techniques that have been shown to improve performance on other NLP tasks.

Tasks: Common Sense Reasoning, Memorization, +1

Input-length-shortening and text generation via attention values

no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock

Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints.

Tasks: Conditional Text Generation, text-classification, +1
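The quadratic growth behind the input-length limitation mentioned in the abstract above is easy to see concretely. Below is a minimal sketch (not code from this paper) showing that the self-attention score matrix for an input of length n has n × n entries, so memory and compute scale quadratically with input length:

```python
# Illustrative only: the attention score matrix is n x n, so memory and
# compute grow quadratically with input length n -- the hardware constraint
# behind transformer input-length limits.
import numpy as np

def attention_scores(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention weights for x of shape (n, d)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                    # (n, n): O(n^2) memory
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

for n in (128, 512, 2048):
    a = attention_scores(np.random.randn(n, 64))
    print(f"n={n}: score matrix {a.shape}, {a.nbytes / 1e6:.1f} MB")
```

Doubling the input length quadruples the score-matrix size, which is why shortening the input (as the paper explores via attention values) directly reduces cost.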

Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation

1 code implementation • 28 Jul 2022 • Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu

In our model, reasoning is performed using an iterative memory neural network based on RNN with a gated attention mechanism.
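A minimal sketch of the kind of iterative, gated-attention reasoning step described in that abstract, assuming PyTorch; the module names and dimensions are illustrative, not the authors' implementation:

```python
# Hypothetical sketch of one gated-attention read/update step over
# RNN-encoded facts, applied iteratively for multi-step reasoning.
import torch
import torch.nn as nn

class GatedAttentionStep(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, memory: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # memory: (n_facts, dim) fact encodings; query: (dim,) current state
        attn = torch.softmax(memory @ query, dim=0)   # attention over facts
        read = attn @ memory                          # weighted read vector
        g = torch.sigmoid(self.gate(torch.cat([query, read])))
        return g * read + (1 - g) * query             # gated state update

dim = 64
step = GatedAttentionStep(dim)
memory = torch.randn(10, dim)   # stand-in for RNN-encoded rules/facts
state = torch.randn(dim)        # stand-in for the question encoding
for _ in range(3):              # iterate for multi-step deduction
    state = step(memory, state)
print(state.shape)              # torch.Size([64])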

AbductionRules: Training Transformers to Explain Unexpected Inputs

1 code implementation • Findings (ACL) 2022 • Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock

Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability.

Tasks: Common Sense Reasoning, Logical Reasoning

Relating Blindsight and AI: A Review

no code implementations • 9 Dec 2021 • Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock

If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks.

DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions

no code implementations • 19 Nov 2021 • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu

We propose DeepQR, a novel neural-network model for automated question quality rating (AQQR) that is trained using multiple-choice-question (MCQ) datasets collected from PeerWise, a widely used learnersourcing platform.

Tasks: Contrastive Learning, Multiple-choice
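A hedged sketch, assuming PyTorch, of how a quality-rating head might be trained with a simple pairwise ranking objective on question embeddings; this is illustrative only, and DeepQR's actual architecture and contrastive-learning setup are defined in the paper:

```python
# Hypothetical sketch: score MCQ embeddings and train the scorer so that
# highly rated questions outscore poorly rated ones (pairwise margin loss).
import torch
import torch.nn as nn

class QualityRater(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        return self.score(q).squeeze(-1)   # scalar quality per question

model = QualityRater(dim=128)
high = torch.randn(8, 128)   # stand-in embeddings of highly rated MCQs
low = torch.randn(8, 128)    # stand-in embeddings of poorly rated MCQs
margin = 1.0
loss = torch.clamp(margin - (model(high) - model(low)), min=0).mean()
loss.backward()
print(float(loss))
```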
